1. Administration Guide
    1. About This Guide
    2. I Common Tasks
    3. II Booting a Linux System
    4. III System
    5. IV Services
    6. V Mobile Computers
    7. VI Troubleshooting
    8. A Documentation Updates
    9. B An Example Network
    10. C GNU Licenses
  2. Deployment Guide
    1. About This Guide
    2. 1 Planning for SUSE Linux Enterprise Desktop
    3. I Installation Preparation
    4. II The Installation Workflow
    5. III Setting Up an Installation Server
    6. IV Remote Installation
    7. V Initial System Configuration
    8. VI Updating and Upgrading SUSE Linux Enterprise
    9. A Documentation Updates
    10. B GNU Licenses
  3. GNOME User Guide
    1. About This Guide
    2. I Introduction
    3. II Connectivity, Files and Resources
    4. III LibreOffice
    5. IV Internet, Communication and Collaboration
    6. V Graphics and Multimedia
    7. A Help and Documentation
    8. B Documentation Updates
    9. C GNU Licenses
  4. Security Guide
    1. About This Guide
    2. 1 Security and Confidentiality
    3. I Authentication
    4. II Local Security
    5. III Network Security
    6. IV Confining Privileges with AppArmor
    7. V The Linux Audit Framework
    8. A Documentation Updates
    9. B GNU Licenses
  5. System Analysis and Tuning Guide
    1. About This Guide
    2. I Basics
    3. II System Monitoring
    4. III Kernel Monitoring
    5. IV Resource Management
    6. V Kernel Tuning
    7. VI Handling System Dumps
    8. VII Synchronized Clocks with Precision Time Protocol
    9. A Documentation Updates
    10. B GNU Licenses
  6. SMT Guide
    1. About This Guide
    2. 1 SMT Installation
    3. 2 SMT Server Configuration
    4. 3 Mirroring Repositories on the SMT Server
    5. 4 Managing Repositories with YaST SMT Server Management
    6. 5 Managing Client Machines with SMT
    7. 6 SMT Reports
    8. 7 SMT Tools and Configuration Files
    9. 8 Configuring Clients to Use SMT
    10. 9 Advanced Topics
    11. A SMT REST API
    12. B Documentation Updates
  7. Quick Start Manuals
    1. Installation Quick Start
    2. A GNU Licenses

SUSE Linux Enterprise Desktop 12 SP3

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Publication Date: May 07, 2018
About This Guide
Available Documentation
Feedback
Documentation Conventions
About the Making of This Documentation
I Common Tasks
1 Bash and Bash Scripts
1.1 What is The Shell?
1.2 Writing Shell Scripts
1.3 Redirecting Command Events
1.4 Using Aliases
1.5 Using Variables in Bash
1.6 Grouping and Combining Commands
1.7 Working with Common Flow Constructs
1.8 For More Information
2 sudo
2.1 Basic sudo Usage
2.2 Configuring sudo
2.3 Common Use Cases
2.4 More Information
3 YaST Online Update
3.1 The Online Update Dialog
3.2 Installing Patches
3.3 Automatic Online Update
4 YaST
4.1 Advanced Key Combinations
5 YaST in Text Mode
5.1 Navigation in Modules
5.2 Advanced Key Combinations
5.3 Restriction of Key Combinations
5.4 YaST Command Line Options
6 Managing Software with Command Line Tools
6.1 Using Zypper
6.2 RPM—the Package Manager
7 System Recovery and Snapshot Management with Snapper
7.1 Default Setup
7.2 Using Snapper to Undo Changes
7.3 System Rollback by Booting from Snapshots
7.4 Creating and Modifying Snapper Configurations
7.5 Manually Creating and Managing Snapshots
7.6 Automatic Snapshot Clean-Up
7.7 Frequently Asked Questions
8 Remote Access with VNC
8.1 The vncviewer Client
8.2 Remmina: the Remote Desktop Client
8.3 One-time VNC Sessions
8.4 Persistent VNC Sessions
8.5 Encrypted VNC Communication
9 File Copying with RSync
9.1 Conceptual Overview
9.2 Basic Syntax
9.3 Copying Files and Directories Locally
9.4 Copying Files and Directories Remotely
9.5 Configuring and Using an Rsync Server
9.6 For More Information
10 GNOME Configuration for Administrators
10.1 Starting Applications Automatically
10.2 Automounting and Managing Media Devices
10.3 Changing Preferred Applications
10.4 Adding Document Templates
10.5 For More Information
II Booting a Linux System
11 Introduction to the Booting Process
11.1 The Linux Boot Process
11.2 initramfs
11.3 Init on initramfs
12 UEFI (Unified Extensible Firmware Interface)
12.1 Secure Boot
12.2 For More Information
13 The Boot Loader GRUB 2
13.1 Main Differences between GRUB Legacy and GRUB 2
13.2 Configuration File Structure
13.3 Configuring the Boot Loader with YaST
13.4 Differences in Terminal Usage on z Systems
13.5 Helpful GRUB 2 Commands
13.6 More Information
14 The systemd Daemon
14.1 The systemd Concept
14.2 Basic Usage
14.3 System Start and Target Management
14.4 Managing Services with YaST
14.5 Customization of systemd
14.6 Advanced Usage
14.7 More Information
III System
15 32-Bit and 64-Bit Applications in a 64-Bit System Environment
15.1 Runtime Support
15.2 Software Development
15.3 Software Compilation on Biarch Platforms
15.4 Kernel Specifications
16 journalctl: Query the systemd Journal
16.1 Making the Journal Persistent
16.2 journalctl Useful Switches
16.3 Filtering the Journal Output
16.4 Investigating systemd Errors
16.5 Journald Configuration
16.6 Using YaST to Filter the systemd Journal
17 Basic Networking
17.1 IP Addresses and Routing
17.2 IPv6—The Next Generation Internet
17.3 Name Resolution
17.4 Configuring a Network Connection with YaST
17.5 NetworkManager
17.6 Configuring a Network Connection Manually
17.7 Setting Up Bonding Devices
17.8 Setting Up Team Devices for Network Teaming
18 Printer Operation
18.1 The CUPS Workflow
18.2 Methods and Protocols for Connecting Printers
18.3 Installing the Software
18.4 Network Printers
18.5 Configuring CUPS with Command Line Tools
18.6 Printing from the Command Line
18.7 Special Features in SUSE Linux Enterprise Desktop
18.8 Troubleshooting
19 The X Window System
19.1 Installing and Configuring Fonts
19.2 For More Information
20 Accessing File Systems with FUSE
20.1 Configuring FUSE
20.2 Mounting an NTFS Partition
20.3 For More Information
21 Managing Kernel Modules
21.1 Listing Loaded Modules with lsmod and modinfo
21.2 Adding and Removing Kernel Modules
22 Dynamic Kernel Device Management with udev
22.1 The /dev Directory
22.2 Kernel uevents and udev
22.3 Drivers, Kernel Modules and Devices
22.4 Booting and Initial Device Setup
22.5 Monitoring the Running udev Daemon
22.6 Influencing Kernel Device Event Handling with udev Rules
22.7 Persistent Device Naming
22.8 Files used by udev
22.9 For More Information
23 Live Patching the Linux Kernel Using kGraft
23.1 Advantages of kGraft
23.2 Low-level Function of kGraft
23.3 Installing kGraft Patches
23.4 Patch Lifecycle
23.5 Removing a kGraft Patch
23.6 Stuck Kernel Execution Threads
23.7 The kgr Tool
23.8 Scope of kGraft Technology
23.9 Scope of SLE Live Patching
23.10 Interaction with the Support Processes
24 Special System Features
24.1 Information about Special Software Packages
24.2 Virtual Consoles
24.3 Keyboard Mapping
24.4 Language and Country-Specific Settings
IV Services
25 Time Synchronization with NTP
25.1 Configuring an NTP Client with YaST
25.2 Manually Configuring NTP in the Network
25.3 Dynamic Time Synchronization at Runtime
25.4 Setting Up a Local Reference Clock
25.5 Clock Synchronization to an External Time Reference (ETR)
26 Sharing File Systems with NFS
26.1 Terminology
26.2 Installing NFS Server
26.3 Configuring Clients
26.4 For More Information
27 Samba
27.1 Terminology
27.2 Installing a Samba Server
27.3 Configuring a Samba Server
27.4 Configuring Clients
27.5 Samba as Login Server
27.6 Advanced Topics
27.7 For More Information
28 On-Demand Mounting with Autofs
28.1 Installation
28.2 Configuration
28.3 Operation and Debugging
28.4 Auto-Mounting an NFS Share
28.5 Advanced Topics
V Mobile Computers
29 Mobile Computing with Linux
29.1 Laptops
29.2 Mobile Hardware
29.3 Cellular Phones and PDAs
29.4 For More Information
30 Using NetworkManager
30.1 Use Cases for NetworkManager
30.2 Enabling or Disabling NetworkManager
30.3 Configuring Network Connections
30.4 NetworkManager and Security
30.5 Frequently Asked Questions
30.6 Troubleshooting
30.7 For More Information
31 Power Management
31.1 Power Saving Functions
31.2 Advanced Configuration and Power Interface (ACPI)
31.3 Rest for the Hard Disk
31.4 Troubleshooting
31.5 For More Information
VI Troubleshooting
32 Help and Documentation
32.1 Documentation Directory
32.2 Man Pages
32.3 Info Pages
32.4 Online Resources
33 Gathering System Information for Support
33.1 Displaying Current System Information
33.2 Collecting System Information with Supportconfig
33.3 Submitting Information to Global Technical Support
33.4 Analyzing System Information
33.5 Gathering Information during the Installation
33.6 Support of Kernel Modules
33.7 For More Information
34 Common Problems and Their Solutions
34.1 Finding and Gathering Information
34.2 Installation Problems
34.3 Boot Problems
34.4 Login Problems
34.5 Network Problems
34.6 Data Problems
A Documentation Updates
A.1 January 2018 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)
A.2 December 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)
A.3 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
A.4 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
A.5 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)
A.6 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)
A.7 February 2015 (Documentation Maintenance Update)
A.8 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)
B An Example Network
C GNU Licenses
C.1 GNU Free Documentation License

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

This guide is intended for use by professional network and system administrators during the operation of SUSE® Linux Enterprise. As such, it is solely concerned with ensuring that SUSE Linux Enterprise is properly configured and that the required services on the network are available to allow it to function properly as initially installed. This guide does not cover the process of ensuring that SUSE Linux Enterprise offers proper compatibility with your enterprise's application software or that its core functionality meets those requirements. It assumes that a full requirements audit has been done and the installation has been requested, or that a test installation for such an audit has been requested.

This guide contains the following:

Support and Common Tasks

SUSE Linux Enterprise offers a wide range of tools to customize various aspects of the system. This part introduces a few of them.

System

Learn more about the underlying operating system by studying this part. SUSE Linux Enterprise supports several hardware architectures and you can use this to adapt your own applications to run on SUSE Linux Enterprise. The boot loader and boot procedure information assists you in understanding how your Linux system works and how your own custom scripts and applications may blend in with it.

Services

SUSE Linux Enterprise is designed to be a network operating system. SUSE® Linux Enterprise Desktop includes client support for many network services. It integrates well into heterogeneous environments including MS Windows clients and servers.

Mobile Computers

Laptops, and the communication between SUSE Linux Enterprise and mobile devices like PDAs or cellular phones, need special attention. Learn how to conserve power and how to integrate different devices into a changing network environment. Also get acquainted with the background technologies that provide the needed functionality.

Troubleshooting

Provides an overview of finding help and additional documentation when you need more information or want to perform specific tasks. There is also a list of the most frequent problems with explanations of how to fix them.

1 Available Documentation

Note
Note: Online Documentation and Latest Updates

Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates, and browse or download the documentation in various formats.

In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.

The following documentation is available for this product:

Installation Quick Start

Lists the system requirements and guides you step-by-step through the installation of SUSE Linux Enterprise Desktop from DVD, or from an ISO image.

Deployment Guide

Shows how to install single or multiple systems and how to exploit the product inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly-customized, and automated installation technique.

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Security Guide

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

System Analysis and Tuning Guide

An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

GNOME User Guide

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

2 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

3 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name : name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

4 About the Making of This Documentation

This documentation is written in SUSEDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-sle.

Part I Common Tasks

1 Bash and Bash Scripts

Today, many people use computers with a graphical user interface (GUI) like GNOME. Although they offer lots of features, their use is limited when it comes to the execution of automated tasks. Shells are a good addition to GUIs and this chapter gives you an overview of some aspects of shells, in this case Bash.

2 sudo

Many commands and system utilities need to be run as root to modify files and/or perform tasks that only the super user is allowed to. For security reasons and to avoid accidentally running dangerous commands, it is generally advisable not to log in directly as root. Instead, it is recommended to work as a normal, unprivileged user and use the sudo command to run commands with elevated privileges.

3 YaST Online Update

SUSE offers a continuous stream of software security updates for your product. By default, the update applet is used to keep your system up-to-date. Refer to Section 10.5, “Keeping the System Up-to-date” for further information on the update applet. This chapter covers the alternative tool for updating software packages: YaST Online Update.

4 YaST

YaST is the installation and configuration tool for SUSE Linux Enterprise Desktop. It has a graphical interface and the capability to customize your system quickly during and after the installation. It can be used to set up hardware, configure the network, system services, and tune your security settings.

5 YaST in Text Mode

This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.

6 Managing Software with Command Line Tools

This chapter describes Zypper and RPM, two command line tools for managing software. For a definition of the terminology used in this context (for example, repository, patch, or update) refer to Section 10.1, “Definition of Terms”.

7 System Recovery and Snapshot Management with Snapper

The ability to take file system snapshots and roll back to them on Linux is a feature that was often requested in the past. Snapper, together with the Btrfs file system or thin-provisioned LVM volumes, now fills that gap.

Btrfs, a new copy-on-write file system for Linux, supports file system snapshots (a copy of the state of a subvolume at a certain point of time) of subvolumes (one or more separately mountable file systems within each physical partition). Snapshots are also supported on thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets you create and manage these snapshots. It comes with a command line and a YaST interface. Starting with SUSE Linux Enterprise Server 12 it is also possible to boot from Btrfs snapshots—see Section 7.3, “System Rollback by Booting from Snapshots” for more information.

8 Remote Access with VNC

Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to a remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.

SUSE Linux Enterprise Desktop supports two different kinds of VNC sessions: One-time sessions that live as long as the VNC connection from the client is kept up, and persistent sessions that live until they are explicitly terminated.

9 File Copying with RSync

Today, a typical user has several computers: home and workplace machines, a laptop, a smartphone or a tablet. This makes the task of keeping files and documents in sync across multiple devices all the more important.

10 GNOME Configuration for Administrators

This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.

1 Bash and Bash Scripts

Abstract

Today, many people use computers with a graphical user interface (GUI) like GNOME. Although they offer lots of features, their use is limited when it comes to the execution of automated tasks. Shells are a good addition to GUIs and this chapter gives you an overview of some aspects of shells, in this case Bash.

1.1 What is The Shell?

Traditionally, the shell is Bash (Bourne again Shell). When this chapter speaks about the shell it means Bash. There are actually more available shells than Bash (ash, csh, ksh, zsh, …), each employing different features and characteristics. If you need further information about other shells, search for shell in YaST.
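
To find out which shell you are working with, you can, for example, print the default login shell and the name of the currently running shell (the output may differ on your system):

tux > echo $SHELL
/bin/bash
tux > echo $0
bash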

1.1.1 Knowing the Bash Configuration Files

A shell can be invoked as an:

  1. Interactive login shell.  This is used when logging in to a machine, invoking Bash with the --login option or when logging in to a remote machine with SSH.

  2. Ordinary interactive shell.  This is normally the case when starting xterm, konsole, gnome-terminal or similar tools.

  3. Non-interactive shell.  This is used when invoking a shell script at the command line.
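
Whether your current Bash runs as a login shell can be checked with the built-in login_shell shell option, for example:

tux > shopt -q login_shell && echo "login shell" || echo "non-login shell"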

Depending on which type of shell you use, different configuration files are read. The following tables show the login and non-login shell configuration files.

Table 1.1: Bash Configuration Files for Login Shells

File

Description

/etc/profile

Do not modify this file, otherwise your modifications can be destroyed during your next update!

/etc/profile.local

Use this file if you extend /etc/profile

/etc/profile.d/

Contains system-wide configuration files for specific programs

~/.profile

Insert user specific configuration for login shells here

Note that the login shell also sources the configuration files listed under Table 1.2, “Bash Configuration Files for Non-Login Shells”.

Table 1.2: Bash Configuration Files for Non-Login Shells

/etc/bash.bashrc

Do not modify this file, otherwise your modifications can be destroyed during your next update!

/etc/bash.bashrc.local

Use this file to insert your system-wide modifications for Bash only

~/.bashrc

Insert user specific configuration here

Additionally, Bash uses some more files:

Table 1.3: Special Files for Bash

File

Description

~/.bash_history

Contains a list of all commands you have typed

~/.bash_logout

Executed when logging out

~/.alias

User defined aliases of frequently used commands. See man 1 alias for more details about how to define aliases.

1.1.2 The Directory Structure

The following table provides a short overview of the most important higher-level directories that you find on a Linux system. Find more detailed information about the directories and important subdirectories in the following list.

Table 1.4: Overview of a Standard Directory Tree

Directory

Contents

/

Root directory—the starting point of the directory tree.

/bin

Essential binary files, such as commands that are needed by both the system administrator and normal users. Usually also contains the shells, such as Bash.

/boot

Static files of the boot loader.

/dev

Files needed to access host-specific devices.

/etc

Host-specific system configuration files.

/home

Holds the home directories of all users who have accounts on the system. However, root's home directory is not located in /home but in /root.

/lib

Essential shared libraries and kernel modules.

/media

Mount points for removable media.

/mnt

Mount point for temporarily mounting a file system.

/opt

Add-on application software packages.

/root

Home directory for the superuser root.

/sbin

Essential system binaries.

/srv

Data for services provided by the system.

/tmp

Temporary files.

/usr

Secondary hierarchy with read-only data.

/var

Variable data such as log files.

/windows

Only available if you have both Microsoft Windows* and Linux installed on your system. Contains the Windows data.

The following list provides more detailed information and gives some examples of which files and subdirectories can be found in the directories:

/bin

Contains the basic shell commands that may be used both by root and by other users. These commands include ls, mkdir, cp, mv, rm and rmdir. /bin also contains Bash, the default shell in SUSE Linux Enterprise Desktop.

/boot

Contains data required for booting, such as the boot loader, the kernel, and other data that is used before the kernel begins executing user-mode programs.

/dev

Holds device files that represent hardware components.

/etc

Contains local configuration files that control the operation of programs like the X Window System. The /etc/init.d subdirectory contains LSB init scripts that can be executed during the boot process.

/home/USERNAME

Holds the private data of every user who has an account on the system. The files located here can only be modified by their owner or by the system administrator. By default, your e-mail directory and personal desktop configuration are located here in the form of hidden files and directories, such as .gconf/ and .config.

Note
Note: Home Directory in a Network Environment

If you are working in a network environment, your home directory may be mapped to a directory in the file system other than /home.

/lib

Contains the essential shared libraries needed to boot the system and to run the commands in the root file system. The Windows equivalent for shared libraries are DLL files.

/media

Contains mount points for removable media, such as CD-ROMs, flash disks, and digital cameras (if they use USB). /media generally holds any type of drive except the hard disk of your system. When your removable medium has been inserted or connected to the system and has been mounted, you can access it from here.

/mnt

This directory provides a mount point for a temporarily mounted file system. root may mount file systems here.

/opt

Reserved for the installation of third-party software. Optional software and larger add-on program packages can be found here.

/root

Home directory for the root user. The personal data of root is located here.

/run

A tmpfs directory used by systemd and various components. /var/run is a symbolic link to /run.

/sbin

As the s indicates, this directory holds utilities for the superuser. /sbin contains the binaries essential for booting, restoring and recovering the system in addition to the binaries in /bin.

/srv

Holds data for services provided by the system, such as FTP and HTTP.

/tmp

This directory is used by programs that require temporary storage of files.

Important
Important: Cleaning up /tmp at Boot Time

Data stored in /tmp is not guaranteed to survive a system reboot. It depends, for example, on settings made in /etc/tmpfiles.d/tmp.conf.

/usr

/usr has nothing to do with users, but is the acronym for Unix system resources. The data in /usr is static, read-only data that can be shared among various hosts compliant with the Filesystem Hierarchy Standard (FHS). This directory contains all application programs including the graphical desktops such as GNOME and establishes a secondary hierarchy in the file system. /usr holds several subdirectories, such as /usr/bin, /usr/sbin, /usr/local, and /usr/share/doc.

/usr/bin

Contains generally accessible programs.

/usr/sbin

Contains programs reserved for the system administrator, such as repair functions.

/usr/local

In this directory the system administrator can install local, distribution-independent extensions.

/usr/share/doc

Holds various documentation files and the release notes for your system. In the manual subdirectory find an online version of this manual. If more than one language is installed, this directory may contain versions of the manuals for different languages.

Under packages find the documentation included in the software packages installed on your system. For every package, a subdirectory /usr/share/doc/packages/PACKAGENAME is created that often holds README files for the package and sometimes examples, configuration files or additional scripts.

If HOWTOs are installed on your system /usr/share/doc also holds the howto subdirectory in which to find additional documentation on many tasks related to the setup and operation of Linux software.

/var

Whereas /usr holds static, read-only data, /var is for data which is written during system operation and thus is variable data, such as log files or spooling data. For an overview of the most important log files you can find under /var/log/, refer to Table 34.1, “Log Files”.

/windows

Only available if you have both Microsoft Windows and Linux installed on your system. Contains the Windows data available on the Windows partition of your system. Whether you can edit the data in this directory depends on the file system your Windows partition uses. If it is FAT32, you can open and edit the files in this directory. For NTFS, SUSE Linux Enterprise Desktop also includes write access support. However, the driver for the NTFS-3g file system has limited functionality.

1.2 Writing Shell Scripts

Shell scripts provide a convenient way to perform a wide range of tasks: collecting data, searching for a word or phrase in a text and other useful things. The following example shows a small shell script that prints a text:

Example 1.1: A Shell Script Printing a Text
#!/bin/sh 1
# Output the following line: 2
echo "Hello World" 3

1

The first line begins with the Shebang characters (#!), which indicate that this file is a script. The script is executed with the interpreter specified after the Shebang, in this case /bin/sh.

2

The second line is a comment beginning with the hash sign. It is recommended to comment difficult lines to remember what they do.

3

The third line uses the built-in command echo to print the corresponding text.

Before you can run this script you need some prerequisites:

  1. Every script should contain a Shebang line (as in the example above). If the line is missing, you need to call the interpreter manually.

  2. You can save the script wherever you want. However, it is a good idea to save it in a directory where the shell can find it. The search path in a shell is determined by the environment variable PATH. Usually a normal user does not have write access to /usr/bin. Therefore it is recommended to save your scripts in the user's directory ~/bin/. In this example, the script is saved as ~/bin/hello.sh.

  3. The script needs executable permissions. Set the permissions with the following command:

    chmod +x ~/bin/hello.sh

If you have fulfilled all of the above prerequisites, you can execute the script in the following ways:

  1. As Absolute Path.  The script can be executed with an absolute path. In our case, it is ~/bin/hello.sh.

  2. Everywhere.  If the PATH environment variable contains the directory where the script is located, you can execute the script with hello.sh.
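
For example, if ~/bin is not already part of your search path, the script from above can be made available everywhere like this:

tux > export PATH="$HOME/bin:$PATH"
tux > hello.sh
Hello World

To make the change permanent, add the export line to ~/.profile (see Table 1.1, “Bash Configuration Files for Login Shells”).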

1.3 Redirecting Command Events

Each command can use three channels, either for input or output:

  • Standard Output.  This is the default output channel. Whenever a command prints something, it uses the standard output channel.

  • Standard Input.  If a command needs input from users or other commands, it uses this channel.

  • Standard Error.  Commands use this channel for error reporting.

To redirect these channels, there are the following possibilities:

Command > File

Saves the output of the command into a file; an existing file will be overwritten. For example, the ls command writes its output into the file listing.txt:

ls > listing.txt
Command >> File

Appends the output of the command to a file. For example, the ls command appends its output to the file listing.txt:

ls >> listing.txt
Command < File

Reads the file as input for the given command. For example, the read command reads the first line of the file foo into the variable a:

read a < foo
Command1 | Command2

Redirects the output of the left command as input for the right command. For example, the cat command outputs the content of the /proc/cpuinfo file. This output is used by grep to filter only those lines which contain cpu:

cat /proc/cpuinfo | grep cpu

Every channel has a file descriptor: 0 (zero) for standard input, 1 for standard output and 2 for standard error. It is allowed to insert this file descriptor before a < or > character. For example, the following line searches for files starting with foo, but suppresses error messages by redirecting them to /dev/null:

find / -name "foo*" 2>/dev/null
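
Standard output and standard error can also be collected in a single file by duplicating the standard error descriptor onto standard output (the order of the redirections matters):

find / -name "foo*" > results.txt 2>&1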

1.4 Using Aliases

An alias is a shortcut definition of one or more commands. The syntax for an alias is:

alias NAME=DEFINITION

For example, the following line defines an alias lt that outputs a long listing (option -l), sorts it by modification time (-t), and prints it in reverse sorted order (-r):

alias lt='ls -ltr'

To view all alias definitions, use alias. Remove your alias with unalias and the corresponding alias name.
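
Aliases defined this way are lost when you close the shell. To make an alias permanent, add its definition to ~/.alias, which is read when a new shell is started (see Table 1.3, “Special Files for Bash”):

echo "alias lt='ls -ltr'" >> ~/.alias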

1.5 Using Variables in Bash

A shell variable can be global or local. Global variables, or environment variables, can be accessed in all shells. In contrast, local variables are visible in the current shell only.

To view all environment variables, use the printenv command. If you need to know the value of a variable, insert the name of your variable as an argument:

printenv PATH

A variable, be it global or local, can also be viewed with echo:

echo $PATH

To set a local variable, use a variable name followed by the equal sign, followed by the value:

PROJECT="SLED"

Do not insert spaces around the equal sign, otherwise you get an error. To set an environment variable, use export:

export NAME="tux"

To remove a variable, use unset:

unset NAME

The following table contains some common environment variables which can be used in your shell scripts:

Table 1.5: Useful Environment Variables

HOME

the home directory of the current user

HOST

the current host name

LANG

when a tool is localized, it uses the language set in this environment variable. For English, the variable can also be set to C

PATH

the search path of the shell, a list of directories separated by colons

PS1

specifies the normal prompt printed before each command

PS2

specifies the secondary prompt printed when you execute a multi-line command

PWD

current working directory

USER

the current user
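
For example, the PS1 variable from the table can be used to customize the prompt. A minimal sketch that displays user, host name, and working directory (add the line to ~/.bashrc to make it permanent):

PS1='\u@\h:\w > '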

1.5.1 Using Argument Variables

For example, if you have the script foo.sh you can execute it like this:

foo.sh "Tux Penguin" 2000

To access all the arguments which are passed to your script, you need positional parameters. These are $1 for the first argument, $2 for the second, and so on. Parameters beyond nine must be referenced with braces, for example ${10}. To get the script name, use $0.

The following script foo.sh prints all arguments from 1 to 4:

#!/bin/sh
echo \"$1\" \"$2\" \"$3\" \"$4\"

If you execute this script with the above arguments, you get:

"Tux Penguin" "2000" "" ""

1.5.2 Using Variable Substitution

Variable substitutions apply a pattern to the content of a variable either from the left or right side. The following list contains the possible syntax forms:

${VAR#pattern}

removes the shortest possible match from the left:

file=/home/tux/book/book.tar.bz2
echo ${file#*/}
home/tux/book/book.tar.bz2
${VAR##pattern}

removes the longest possible match from the left:

file=/home/tux/book/book.tar.bz2
echo ${file##*/}
book.tar.bz2
${VAR%pattern}

removes the shortest possible match from the right:

file=/home/tux/book/book.tar.bz2
echo ${file%.*}
/home/tux/book/book.tar
${VAR%%pattern}

removes the longest possible match from the right:

file=/home/tux/book/book.tar.bz2
echo ${file%%.*}
/home/tux/book/book
${VAR/pattern_1/pattern_2}

replaces the first match of PATTERN_1 in the content of VAR with PATTERN_2:

file=/home/tux/book/book.tar.bz2
echo ${file/tux/wilber}
/home/wilber/book/book.tar.bz2
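
These substitutions are useful for manipulating file names without calling external tools. The following sketch combines two of the forms above to extract the bare file name without directory and extension:

file=/home/tux/book/book.tar.bz2
name=${file##*/}   # book.tar.bz2
echo ${name%%.*}   # book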

1.6 Grouping and Combining Commands

Shells allow you to concatenate and group commands for conditional execution. Each command returns an exit code which determines the success or failure of its operation. If it is 0 (zero), the command was successful; any other value marks an error which is specific to the command.

The following list shows how commands can be grouped:

Command1 ; Command2

executes the commands in sequential order. The exit code is not checked. The following line displays the content of the file with cat and then prints its file properties with ls regardless of their exit codes:

cat filelist.txt ; ls -l filelist.txt
Command1 && Command2

runs the right command if the left command was successful (logical AND). The following line displays the content of the file and prints its file properties only when the previous command was successful (compare it with the previous entry in this list):

cat filelist.txt && ls -l filelist.txt
Command1 || Command2

runs the right command when the left command has failed (logical OR). The following line creates the directory /home/wilber/bar only if the creation of /home/tux/foo failed:

mkdir /home/tux/foo || mkdir /home/wilber/bar
funcname(){ ... }

creates a shell function. You can use the positional parameters to access its arguments. The following line defines the function hello to print a short message:

hello() { echo "Hello $1"; }

You can call this function like this:

hello Tux

which prints:

Hello Tux
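
The exit code of the most recently executed command is available in the special variable $?, where 0 means success (the exact non-zero value depends on the command):

tux > ls /nonexistent
ls: cannot access '/nonexistent': No such file or directory
tux > echo $?
2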

1.7 Working with Common Flow Constructs

To control the flow of your script, a shell has while, if, for and case constructs.

1.7.1 The if Control Command

The if command is used to check expressions. For example, the following code tests whether the current user is Tux:

if test $USER = "tux"; then
  echo "Hello Tux."
else
  echo "You are not Tux."
fi

The test expression can be as simple or as complex as needed. The following expression checks if the file /tmp/foo.txt exists:

if test -e /tmp/foo.txt ; then
  echo "Found foo.txt"
fi

The test expression can also be abbreviated in square brackets:

if [ -e /tmp/foo.txt ] ; then
  echo "Found foo.txt"
fi

Find more useful expressions at http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/lsst/ch03sec02.html.

1.7.2 Creating Loops with the for Command

The for loop allows you to execute commands for a list of entries. For example, the following code prints some information about the PNG files in the current directory:

for i in *.png; do
 ls -l $i
done
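
The while and case constructs mentioned above work in a similar fashion. A short sketch that reads a file line by line and skips lines starting with a hash sign:

#!/bin/sh
while read -r line; do
  case "$line" in
    \#*) ;;                        # ignore comment lines
    *) echo "processing: $line" ;;
  esac
done < filelist.txt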

1.8 For More Information

Important information about Bash is provided in the man page (man bash).

2 sudo

Many commands and system utilities need to be run as root to modify files and/or perform tasks that only the superuser is allowed to perform. For security reasons and to avoid accidentally running dangerous commands, it is generally advisable not to log in directly as root. Instead, it is recommended to work as a normal, unprivileged user and use the sudo command to run commands with elevated privileges.

On SUSE Linux Enterprise Desktop, sudo is configured by default to work similarly to su. However, sudo offers the possibility to allow users to run commands with the privileges of any other user in a highly configurable manner. This can be used to assign roles with specific privileges to certain users and groups. For example, it is possible to allow members of the group users to run a command frobnicate with the privileges of wilber, with the restriction that no command arguments are specified. While su always requires the root password for authentication with PAM, sudo can be configured to authenticate with your own credentials. This increases security by not having to share the root password.

2.1 Basic sudo Usage

sudo is simple to use, yet very powerful.

2.1.1 Running a Single Command

Logged in as normal user, you can run any command as root by adding sudo before it. It will prompt for the root password and, if authenticated successfully, run the command as root:

tux > id -un 1
tux
tux > sudo id -un
root's password: 2
root
tux > id -un
tux 3
tux > sudo id -un
4
root

1

The id -un command prints the login name of the current user.

2

The password is not shown during input, neither as clear text nor as bullets.

3

Only commands started with sudo are run with elevated privileges. If you run the same command without the sudo prefix, it is run with the privileges of the current user again.

4

For a limited amount of time, you do not need to enter the root password again.
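
The cached credentials can also be invalidated ahead of time (the timeout is five minutes by default and can be changed with the timestamp_timeout option in sudoers) by running:

sudo -k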

Tip
Tip: I/O Redirection

I/O redirection does not work as you would probably expect:

tux > sudo echo s > /proc/sysrq-trigger
bash: /proc/sysrq-trigger: Permission denied
tux > sudo cat < /proc/1/maps
bash: /proc/1/maps: Permission denied

Only the echo/cat binary is run with elevated privileges, while the redirection is performed by the user's shell with user privileges. You can either start a shell as described in Section 2.1.2, “Starting a Shell” or use the dd utility instead:

echo s | sudo dd of=/proc/sysrq-trigger
sudo dd if=/proc/1/maps | cat

2.1.2 Starting a Shell

Having to add sudo before every command can be cumbersome. While you could run a shell as a command (sudo bash), it is recommended to use one of the built-in mechanisms to start a shell:

sudo -s (<command>)

Starts a shell specified by the SHELL environment variable or the target user's default shell. If a command is given, it is passed to the shell (with the -c option); otherwise the shell runs in interactive mode.

tux:~ > sudo -s
root's password:
root:/home/tux # exit
tux:~ > 
sudo -i (<command>)

Like -s, but starts the shell as a login shell. This means that the shell's start-up files (.profile etc.) are processed and the current working directory is set to the target user's home directory.

tux:~ > sudo -i
root's password:
root:~ # exit
tux:~ > 

2.1.3 Environment Variables

By default, sudo does not propagate environment variables:

tux > ENVVAR=test env | grep ENVVAR
ENVVAR=test
tux > ENVVAR=test sudo env | grep ENVVAR
root's password:
1
tux > 

1

The empty output shows that the environment variable ENVVAR did not exist in the context of the command run with sudo.

This behavior can be changed by the env_reset option, see Table 2.1, “Useful Flags and Options”.
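
Alternatively, to keep a specific variable across the environment reset, add it to the env_keep list in a drop-in file (see Section 2.2.1, “Editing the Configuration Files”). A minimal sketch, reusing the example variable name ENVVAR:

Defaults env_keep += "ENVVAR"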

2.2 Configuring sudo

sudo is a very flexible tool with extensive configuration.

Note
Note: Locked yourself out of sudo

If you accidentally locked yourself out of sudo, use su - and the root password to get a root shell. To fix the error, run visudo.

2.2.1 Editing the Configuration Files

The main policy configuration file for sudo is /etc/sudoers. As it is possible to lock yourself out of the system due to errors in this file, it is strongly recommended to use visudo for editing. It will prevent simultaneous changes to the opened file and check for syntax errors before saving the modifications.

Despite its name, you can also use editors other than vi by setting the EDITOR environment variable, for example:

sudo EDITOR=/usr/bin/nano visudo

However, the /etc/sudoers file itself is supplied by the system packages and modifications may break on updates. Therefore, it is recommended to put custom configuration into files in the /etc/sudoers.d/ directory. Any file in there is automatically included. To create or edit a file in that subdirectory, run:

sudo visudo -f /etc/sudoers.d/NAME

Alternatively with a different editor (for example nano):

sudo EDITOR=/usr/bin/nano visudo -f /etc/sudoers.d/NAME
Note
Note: Ignored Files in /etc/sudoers.d

The #includedir command in /etc/sudoers, used for /etc/sudoers.d, ignores files that end in ~ (tilde) or contain a . (dot).

For more information on the visudo command, run man 8 visudo.

2.2.2 Basic sudoers Configuration Syntax

In the sudoers configuration files, there are two types of options: strings and flags. While strings can contain any value, flags can be turned either ON or OFF. The most important syntax constructs for sudoers configuration files are:

# Everything on a line after a # gets ignored 1
Defaults !insults # Disable the insults flag 2
Defaults env_keep += "DISPLAY HOME" # Add DISPLAY and HOME to env_keep
tux ALL = NOPASSWD: /usr/bin/frobnicate, PASSWD: /usr/bin/journalctl 3

1

There are two exceptions: #include and #includedir are normal commands. A # followed by digits specifies a UID.

2

Remove the ! to set the specified flag to ON.

3

See Section 2.2.3, “Rules in sudoers”.

Table 2.1: Useful Flags and Options

Option name

Description

Example

targetpw

This flag controls whether the invoking user is required to enter the password of the target user (ON), for example root, or of the invoking user (OFF).

Defaults targetpw # Turn targetpw flag ON
rootpw

If set, sudo will prompt for the root password instead of the target user's or the invoker's. The default is OFF.

Defaults !rootpw # Turn rootpw flag OFF
env_reset

If set, sudo constructs a minimal environment with only TERM, PATH, HOME, MAIL, SHELL, LOGNAME, USER, USERNAME, and SUDO_* set. Additionally, variables listed in env_keep get imported from the calling environment. The default is ON.

Defaults env_reset # Turn env_reset flag ON
env_keep

List of environment variables to keep when the env_reset flag is ON.

# Set env_keep to contain EDITOR and PROMPT
Defaults env_keep = "EDITOR PROMPT"
Defaults env_keep += "JRE_HOME" # Add JRE_HOME
Defaults env_keep -= "JRE_HOME" # Remove JRE_HOME
env_delete

List of environment variables to remove when the env_reset flag is OFF.

# Set env_delete to contain EDITOR and PROMPT
Defaults env_delete = "EDITOR PROMPT"
Defaults env_delete += "JRE_HOME" # Add JRE_HOME
Defaults env_delete -= "JRE_HOME" # Remove JRE_HOME

The Defaults token can also be used to create aliases for a collection of users, hosts, and commands. Furthermore, it is possible to apply an option only to a specific set of users.

For detailed information about the /etc/sudoers configuration file, consult man 5 sudoers.

2.2.3 Rules in sudoers

Rules in the sudoers configuration can be very complex, so this section will only cover the basics. Each rule follows the basic scheme ([] marks optional parts):

#Who      Where         As whom      Tag                What
User_List Host_List = [(User_List)] [NOPASSWD:|PASSWD:] Cmnd_List
Syntax for sudoers Rules
User_List

One or more (separated by ,) identifiers: Either a user name, a group in the format %GROUPNAME or a user ID in the format #UID. Negation can be performed with a ! prefix.

Host_List

One or more (separated by ,) identifiers: Either a (fully qualified) host name or an IP address. Negation can be performed with a ! prefix. ALL is the usual choice for Host_List.

NOPASSWD:|PASSWD:

The user will not be prompted for a password when running commands matching Cmnd_List after NOPASSWD:.

PASSWD is the default; it only needs to be specified when both are on the same line:

tux ALL = PASSWD: /usr/bin/foo, NOPASSWD: /usr/bin/bar
Cmnd_List

One or more (separated by ,) specifiers: A path to an executable, followed by allowed arguments or nothing.

/usr/bin/foo     # Anything allowed
/usr/bin/foo bar # Only "/usr/bin/foo bar" allowed
/usr/bin/foo ""  # No arguments allowed

ALL can be used as User_List, Host_List, and Cmnd_List.

A rule that allows tux to run all commands as root without entering a password:

tux ALL = NOPASSWD: ALL

A rule that allows tux to run systemctl restart apache2:

tux ALL = /usr/bin/systemctl restart apache2

A rule that allows tux to run wall as admin with no arguments:

tux ALL = (admin) /usr/bin/wall ""
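
Rules can also apply to groups. A hypothetical rule that lets every member of the group power halt the machine with exactly this argument list and without entering a password:

%power ALL = NOPASSWD: /sbin/shutdown -h now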
Warning
Warning: Dangerous constructs

Constructs of the kind

ALL ALL = ALL

must not be used without Defaults targetpw, otherwise anyone can run commands as root.

2.3 Common Use Cases

Although the default configuration is often sufficient for simple setups and desktop environments, custom configurations can be very useful.

2.3.1 Using sudo without root Password

In cases with special restrictions (user X can only run command Y as root), the default configuration is not suitable. In other cases, it is still favorable to have some kind of separation of privileges. By convention, members of the group wheel can run all commands with sudo as root.

  1. Add yourself to the wheel group

    If your user account is not already a member of the wheel group, add it by running sudo usermod -a -G wheel USERNAME and logging out and in again. Verify that the change was successful by running groups USERNAME.

  2. Make authentication with the invoking user's password the default.

    Create the file /etc/sudoers.d/userpw with visudo (see Section 2.2.1, “Editing the Configuration Files”) and add:

    Defaults !targetpw
  3. Select a new default rule.

    Depending on whether you want users to re-enter their passwords, uncomment the specific line in /etc/sudoers and comment out the default rule.

    ## Uncomment to allow members of group wheel to execute any command
    # %wheel ALL=(ALL) ALL
    
    ## Same thing without a password
    # %wheel ALL=(ALL) NOPASSWD: ALL
  4. Make the default rule more restrictive

    Comment out or remove the allow-everything rule in /etc/sudoers:

    ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!
    Warning
    Warning: Dangerous rule in sudoers

    Do not forget this step, otherwise any user can execute any command as root!

  5. Test the configuration

    Try to run sudo as member and non-member of wheel.

    tux:~ > groups
    users wheel
    tux:~ > sudo id -un
    tux's password:
    root
    wilber:~ > groups
    users
    wilber:~ > sudo id -un
    wilber is not in the sudoers file.  This incident will be reported.

2.3.2 Using sudo with X.Org Applications

When starting graphical applications with sudo, you will encounter the following error:

tux > sudo xterm
xterm: Xt error: Can't open display: %s
xterm: DISPLAY is not set

YaST will pick the ncurses interface instead of the graphical one.

To use X.Org in applications started with sudo, the environment variables DISPLAY and XAUTHORITY need to be propagated. To configure this, create the file /etc/sudoers.d/xorg (see Section 2.2.1, “Editing the Configuration Files”) and add the following line:

Defaults env_keep += "DISPLAY XAUTHORITY"

If not set already, set the XAUTHORITY variable as follows:

export XAUTHORITY=~/.Xauthority

Now X.Org applications can be run as usual:

sudo yast2

2.4 More Information

A quick overview about the available command line switches can be retrieved by sudo --help. An explanation and other important information can be found in the man page: man 8 sudo, while the configuration is documented in man 5 sudoers.

3 YaST Online Update

SUSE offers a continuous stream of software security updates for your product. By default, the update applet is used to keep your system up-to-date. Refer to Section 10.5, “Keeping the System Up-to-date” for further information on the update applet. This chapter covers the alternative tool for updating software packages: YaST Online Update.

The current patches for SUSE® Linux Enterprise Desktop are available from an update software repository. If you have registered your product during the installation, an update repository is already configured. If you have not registered SUSE Linux Enterprise Desktop, you can do so by starting the Product Registration in YaST. Alternatively, you can manually add an update repository from a source you trust. To add or remove repositories, start the Repository Manager with Software › Software Repositories in YaST. Learn more about the Repository Manager in Section 10.4, “Managing Software Repositories and Services”.

Note
Note: Error on Accessing the Update Catalog

If you are not able to access the update catalog, this might be because of an expired subscription. Normally, SUSE Linux Enterprise Desktop comes with a one-year or three-year subscription, during which you have access to the update catalog. This access will be denied after the subscription ends.

If access to the update catalog is denied, you will see a warning message prompting you to visit the SUSE Customer Center and check your subscription. The SUSE Customer Center is available at https://scc.suse.com/.

SUSE provides updates with different relevance levels:

Security Updates

Fix severe security hazards and should always be installed.

Recommended Updates

Fix issues that could compromise your computer.

Optional Updates

Fix non-security relevant issues or provide enhancements.

3.1 The Online Update Dialog

To open the YaST Online Update dialog, start YaST and select Software › Online Update. Alternatively, start it from the command line with yast2 online_update.

The Online Update window consists of four sections.

YaST Online Update
Figure 3.1: YaST Online Update

The Summary section on the left lists the available patches for SUSE Linux Enterprise Desktop. The patches are sorted by security relevance: security, recommended, and optional. You can change the view of the Summary section by selecting one of the following options from Show Patch Category:

Needed Patches (default view)

Non-installed patches that apply to packages installed on your system.

Unneeded Patches

Patches that either apply to packages not installed on your system, or patches that have requirements which have already been fulfilled (because the relevant packages have already been updated from another source).

All Patches

All patches available for SUSE Linux Enterprise Desktop.

Each list entry in the Summary section consists of a symbol and the patch name. For an overview of the possible symbols and their meaning, press Shift–F1. Actions required by Security and Recommended patches are automatically preset. These actions are Autoinstall, Autoupdate and Autodelete.

If you install an up-to-date package from a repository other than the update repository, the requirements of a patch for this package may be fulfilled with this installation. In this case a check mark is displayed in front of the patch summary. The patch will be visible in the list until you mark it for installation. This will in fact not install the patch (because the package is already up-to-date), but mark the patch as having been installed.

Select an entry in the Summary section to view a short Patch Description at the bottom left corner of the dialog. The upper right section lists the packages included in the selected patch (a patch can consist of several packages). Click an entry in the upper right section to view details about the respective package that is included in the patch.

3.2 Installing Patches

The YaST Online Update dialog allows you to either install all available patches at once or manually select the desired patches. You may also revert patches that have been applied to the system.

By default, all new patches (except optional ones) that are currently available for your system are already marked for installation. They will be applied automatically once you click Accept or Apply. If one or multiple patches require a system reboot, you will be notified about this before the patch installation starts. You can then either decide to continue with the installation of the selected patches, skip the installation of all patches that need rebooting and install the rest, or go back to the manual patch selection.

Procedure 3.1: Applying Patches with YaST Online Update
  1. Start YaST and select Software › Online Update.

  2. To automatically apply all new patches (except optional ones) that are currently available for your system, press Apply or Accept.

  3. To first modify the selection of patches that you want to apply:

    1. Use the respective filters and views that the interface provides. For details, refer to Section 3.1, “The Online Update Dialog”.

    2. Select or deselect patches according to your needs and wishes by right-clicking the patch and choosing the respective action from the context menu.

      Important
      Important: Always Apply Security Updates

      Do not deselect any security-related patches without a very good reason. These patches fix severe security hazards and prevent your system from being exploited.

    3. Most patches include updates for several packages. If you want to change actions for single packages, right-click a package in the package view and choose an action.

    4. To confirm your selection and apply the selected patches, proceed with Apply or Accept.

  4. After the installation is complete, click Finish to leave the YaST Online Update. Your system is now up-to-date.

3.3 Automatic Online Update

YaST also offers the possibility to set up an automatic update with a daily, weekly, or monthly schedule. To use the respective module, you need to install the yast2-online-update-configuration package first.

By default, updates are downloaded as delta RPMs. Since rebuilding RPM packages from delta RPMs is a memory- and processor-intensive task, certain setups or hardware configurations might require you to disable the use of delta RPMs for the sake of performance.

Some patches, such as kernel updates or packages requiring license agreements, require user interaction, which would cause the automatic update procedure to stop. You can configure YaST to skip patches that require user interaction.

Procedure 3.2: Configuring the Automatic Online Update
  1. After installation, start YaST and select Software › Online Update Configuration.

    Alternatively, start the module with yast2 online_update_configuration from the command line.

  2. Activate Automatic Online Update.

  3. Choose the update interval: Daily, Weekly, or Monthly.

  4. To automatically accept any license agreements, activate Agree with Licenses.

  5. Select Skip Interactive Patches if you want the update procedure to proceed fully automatically.

    Important
    Important: Skipping Patches

    If you choose to skip patches that require interaction, run a manual Online Update occasionally to install those patches, too. Otherwise you might miss important patches.

  6. To automatically install all packages recommended by updated packages, activate Include Recommended Packages.

  7. To disable the use of delta RPMs (for performance reasons), deactivate Use Delta RPMs.

  8. To filter the patches by category (such as security or recommended), activate Filter by Category and add the appropriate patch categories from the list. Only patches of the selected categories will be installed. Others will be skipped.

  9. Confirm your configuration with OK.

The automatic online update does not automatically restart the system afterward. If there are package updates that require a system reboot, you need to do this manually.

4 YaST

YaST is the installation and configuration tool for SUSE Linux Enterprise Desktop. It has a graphical interface and the capability to customize your system quickly during and after the installation. It can be used to set up hardware, configure the network and system services, and tune your security settings.

4.1 Advanced Key Combinations

YaST has a set of advanced key combinations.

Print Screen

Take and save a screenshot. May not be available when YaST is running under some desktop environments.

Shift+F4

Enable/disable the color palette optimized for vision impaired users.

Shift+F7

Enable/disable logging of debug messages.

Shift+F8

Open a file dialog to save log files to a non-standard location.

Ctrl+Shift+Alt+D

Send a DebugEvent. YaST modules can react to this by executing special debugging actions. The result depends on the specific YaST module.

Ctrl+Shift+Alt+M

Start/stop macro recorder.

Ctrl+Shift+Alt+P

Replay macro.

Ctrl+Shift+Alt+S

Show style sheet editor.

Ctrl+Shift+Alt+T

Dump widget tree to the log file.

Ctrl+Shift+Alt+X

Open a terminal window (xterm). Useful for the installation process via VNC.

Ctrl+Shift+Alt+Y

Show widget tree browser.

5 YaST in Text Mode

This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.

YaST in text mode uses the ncurses library to provide an easy pseudo-graphical user interface. The ncurses library is installed by default. The minimum supported size of the terminal emulator in which to run YaST is 80x25 characters.

Main Window of YaST in Text Mode
Figure 5.1: Main Window of YaST in Text Mode

When you start YaST in text mode, the YaST control center appears (see Figure 5.1). The main window consists of three areas. The left frame features the categories to which the various modules belong. This frame is active when YaST is started and therefore it is marked by a bold white border. The active category is selected. The right frame provides an overview of the modules available in the active category. The bottom frame contains the buttons for Help and Quit.

When you start the YaST control center, the category Software is selected automatically. Use ↑ and ↓ to change the category. To select a module from the category, activate the right frame with → and then use ↑ and ↓ to select the module. Keep the arrow keys pressed to scroll through the list of available modules. The selected module is highlighted. Press Enter to start the active module.

Various buttons or selection fields in the module contain a highlighted letter (yellow by default). Use Alt+highlighted letter to select a button directly instead of navigating there with →| (the Tab key). Exit the YaST control center by pressing Alt+Q or by selecting Quit and pressing Enter.

Tip
Tip: Refreshing YaST Dialogs

If a YaST dialog gets corrupted or distorted (for example, while resizing the window), press Ctrl+L to refresh and restore its contents.

5.1 Navigation in Modules

The following description of the control elements in the YaST modules assumes that all function keys and Alt key combinations work and are not assigned to different global functions. Read Section 5.3, “Restriction of Key Combinations” for information about possible exceptions.

Navigation among Buttons and Selection Lists

Use →| to navigate among the buttons and frames containing selection lists. To navigate in reverse order, use the Alt+→| or Shift+→| combinations.

Navigation in Selection Lists

Use the arrow keys (↑ and ↓) to navigate among the individual elements in an active frame containing a selection list. If individual entries within a frame exceed its width, use Shift+→ or Shift+← to scroll horizontally to the right and left. Alternatively, use Ctrl+E or Ctrl+A. This combination can also be used if using → or ← results in changing the active frame or the current selection list, as in the control center.

Buttons, Radio Buttons, and Check Boxes

To select buttons with empty square brackets (check boxes) or empty parentheses (radio buttons), press Space or Enter. Alternatively, radio buttons and check boxes can be selected directly with Alt+highlighted letter. In this case, you do not need to confirm with Enter. If you navigate to an item with →|, press Enter to execute the selected action or activate the respective menu item.

Function Keys

The function keys (F1 ... F12) enable quick access to the various buttons. Available function key combinations (FX) are shown in the bottom line of the YaST screen. Which function keys are actually mapped to which buttons depends on the active YaST module, because the different modules offer different buttons (Details, Info, Add, Delete, etc.). Use F10 for Accept, OK, Next, and Finish. Press F1 to access the YaST help.

Using Navigation Tree in ncurses Mode

Some YaST modules use a navigation tree in the left part of the window to select configuration dialogs. Use the arrow keys (↑ and ↓) to navigate in the tree. Use Space to open or close tree items. In ncurses mode, Enter must be pressed after a selection in the navigation tree to show the selected dialog. This is intentional behavior to save time-consuming redraws when browsing through the navigation tree.

Selecting Software in the Software Installation Module

Use the filters on the left side to limit the number of displayed packages. Installed packages are marked with the letter i. To change the status of a package, press Space or Enter. Alternatively, use the Actions menu to select the needed status change (install, delete, update, taboo or lock).

The Software Installation Module
Figure 5.2: The Software Installation Module

5.2 Advanced Key Combinations

YaST in text mode has a set of advanced key combinations.

Shift+F1

Show a list of advanced hotkeys.

Shift+F4

Change color scheme.

Ctrl+\

Quit the application.

Ctrl+L

Refresh screen.

Ctrl+D F1

Show a list of advanced hotkeys.

Ctrl+D Shift+D

Dump dialog to the log file as a screenshot.

Ctrl+D Shift+Y

Open YDialogSpy to see the widget hierarchy.

5.3 Restriction of Key Combinations

If your window manager uses global Alt combinations, the Alt combinations in YaST might not work. Keys like Alt or Shift can also be occupied by the settings of the terminal.

Replacing Alt with Esc

Alt shortcuts can be executed with Esc instead of Alt. For example, Esc H replaces Alt+H. (First press Esc, then press H.)

Backward and Forward Navigation with Ctrl+F and Ctrl+B

If the Alt and Shift combinations are occupied by the window manager or the terminal, use the combinations Ctrl+F (forward) and Ctrl+B (backward) instead.

Restriction of Function Keys

The function keys (F1 ... F12) are also mapped to buttons. Certain function keys might be occupied by the terminal and may not be available for YaST. However, the Alt key combinations and function keys should always be fully available on a pure text console.

5.4 YaST Command Line Options

Besides the text mode interface, YaST provides a pure command line interface. To get a list of YaST command line options, enter:

yast -h

5.4.1 Starting the Individual Modules

To save time, the individual YaST modules can be started directly. To start a module, enter:

yast MODULE_NAME

View a list of all module names available on your system with yast -l or yast --list. Start the network module, for example, with yast lan.

5.4.2 Installing Packages from the Command Line

If you know a package name and the package is provided by any of your active installation repositories, you can use the command line option -i to install the package:

yast -i PACKAGE_NAME

or

yast --install PACKAGE_NAME

PACKAGE_NAME can be a single short package name (for example gvim) installed with dependency checking, or the full path to an RPM package which is installed without dependency checking.

If you need a command line based software management utility with functionality beyond what YaST provides, consider using Zypper. This utility uses the same software management library that is also the foundation for the YaST package manager. The basic usage of Zypper is covered in Section 6.1, “Using Zypper”.

5.4.3 Command Line Parameters of the YaST Modules

To use YaST functionality in scripts, YaST provides command line support for individual modules. Not all modules have command line support. To display the available options of a module, enter:

yast MODULE_NAME help

If a module does not provide command line support, the module is started in text mode and the following message appears:

This YaST module does not support the command line interface.
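
For modules that do support the command line interface, the help output lists the available actions and options. As a minimal sketch, assuming the users module and its add action (the user name, password, and home directory below are made-up values), a new user could be created from a script like this:

tux > sudo yast users add username=tux password=secret home=/home/tux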

6 Managing Software with Command Line Tools

Abstract

This chapter describes Zypper and RPM, two command line tools for managing software. For a definition of the terminology used in this context (for example, repository, patch, or update) refer to Section 10.1, “Definition of Terms”.

6.1 Using Zypper

  • Filename: zypper.xml
  • ID: sec.zypper

Zypper is a command line package manager for installing, updating and removing packages as well as for managing repositories. It is especially useful for accomplishing remote software management tasks or managing software from shell scripts.

6.1.1 General Usage

The general syntax of Zypper is:

zypper [--global-options] COMMAND [--command-options] [arguments]

The components enclosed in brackets are not required. See zypper help for a list of general options and all commands. To get help for a specific command, type zypper help COMMAND.

Zypper Commands

The simplest way to execute Zypper is to type its name, followed by a command. For example, to apply all needed patches to the system, use:

tux > sudo zypper patch
Global Options

Additionally, you can choose from one or more global options by typing them immediately before the command:

tux > sudo zypper --non-interactive patch

In the above example, the option --non-interactive means that the command is run without asking anything (automatically applying the default answers).

Command-Specific Options

To use options that are specific to a particular command, type them immediately after the command:

tux > sudo zypper patch --auto-agree-with-licenses

In the above example, --auto-agree-with-licenses is used to apply all needed patches to a system without you being asked to confirm any licenses. Instead, licenses are accepted automatically.

Arguments

Some commands require one or more arguments. For example, when using the command install, you need to specify which package or which packages you want to install:

tux > sudo zypper install mplayer

Some options also require a single argument. The following command will list all known patterns:

tux > zypper search -t pattern

You can combine all of the above. For example, the following command will install the aspell-de and aspell-fr packages from the factory repository while being verbose:

tux > sudo zypper -v install --from factory aspell-de aspell-fr

The --from option makes sure to keep all repositories enabled (for solving any dependencies) while requesting the package from the specified repository.

Most Zypper commands have a dry-run option that does a simulation of the given command. It can be used for test purposes.

tux > sudo zypper remove --dry-run MozillaFirefox

Zypper supports the global --userdata STRING option. You can specify a string with this option, which gets written to Zypper's log files and plug-ins (such as the Btrfs plug-in). It can be used to mark and identify transactions in log files.

tux > sudo zypper --userdata STRING patch

6.1.2 Installing and Removing Software with Zypper

To install or remove packages, use the following commands:

tux > sudo zypper install PACKAGE_NAME
sudo zypper remove PACKAGE_NAME
Warning
Warning: Do Not Remove Mandatory System Packages

Do not remove mandatory system packages like glibc, zypper, or kernel. If they are removed, the system can become unstable or stop working altogether.

6.1.2.1 Selecting Which Packages to Install or Remove

There are various ways to address packages with the commands zypper install and zypper remove.

By Exact Package Name
tux > sudo zypper install MozillaFirefox
By Exact Package Name and Version Number
tux > sudo zypper install MozillaFirefox-52.2
By Repository Alias and Package Name
tux > sudo zypper install mozilla:MozillaFirefox

Where mozilla is the alias of the repository from which to install.

By Package Name Using Wild Cards

You can select all packages that have names starting or ending with a certain string. Use wild cards with care, especially when removing packages. The following command will install all packages starting with Moz:

tux > sudo zypper install 'Moz*'
Tip
Tip: Removing all -debuginfo Packages

When debugging a problem, you sometimes need to temporarily install a lot of -debuginfo packages which give you more information about running processes. After your debugging session finishes and you need to clean the environment, run the following:

tux > sudo zypper remove '*-debuginfo'
By Capability

For example, if you want to install a Perl module without knowing the name of the package, capabilities come in handy:

tux > sudo zypper install 'perl(Time::ParseDate)'
By Capability, Hardware Architecture, or Version

Together with a capability, you can specify a hardware architecture and a version:

  • The name of the desired hardware architecture is appended to the capability after a full stop. For example, to specify the AMD64/Intel 64 architectures (which in Zypper is named x86_64), use:

    tux > sudo zypper install 'firefox.x86_64'
  • Versions must be appended to the end of the string and must be preceded by an operator: < (less than), <= (less than or equal), = (equal), >= (greater than or equal), > (greater than).

    tux > sudo zypper install 'firefox>=52.2'
  • You can also combine a hardware architecture and version requirement:

    tux > sudo zypper install 'firefox.x86_64>=52.2'
By Path to the RPM file

You can also specify a local or remote path to a package:

tux > sudo zypper install /tmp/install/MozillaFirefox.rpm
tux > sudo zypper install http://download.example.com/MozillaFirefox.rpm

6.1.2.2 Combining Installation and Removal of Packages

To install and remove packages simultaneously, use the +/- modifiers. To install emacs and simultaneously remove vim, use:

tux > sudo zypper install emacs -vim

To remove emacs and simultaneously install vim, use:

tux > sudo zypper remove emacs +vim

To prevent the package name starting with the hyphen from being interpreted as a command option, always use it as the second argument. If this is not possible, precede it with --:

tux > sudo zypper install -emacs +vim       # Wrong
tux > sudo zypper install vim -emacs        # Correct
tux > sudo zypper install -- -emacs +vim    # Correct
tux > sudo zypper remove emacs +vim         # Correct

6.1.2.3 Cleaning Up Dependencies of Removed Packages

To automatically remove any packages that become unneeded after removing a specified package, use the --clean-deps option:

tux > sudo zypper rm PACKAGE_NAME --clean-deps

6.1.2.4 Using Zypper in Scripts

By default, Zypper asks for a confirmation before installing or removing a selected package, or when a problem occurs. You can override this behavior using the --non-interactive option. This option must be given before the actual command (install, remove, and patch), as can be seen in the following:

tux > sudo zypper --non-interactive install PACKAGE_NAME

This option allows the use of Zypper in scripts and cron jobs.
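
As an illustration, the following minimal sketch shows how these pieces could be combined in an unattended maintenance script (the file name /etc/cron.weekly/zypper-patch is an assumption; both options used are described in this chapter):

#!/bin/sh
# Hypothetical /etc/cron.weekly/zypper-patch:
# refresh all repositories, then apply all needed patches
# without asking questions or prompting for licenses.
zypper --non-interactive refresh
zypper --non-interactive patch --auto-agree-with-licenses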

6.1.2.5 Installing or Downloading Source Packages

To install the corresponding source package of a package, use:

tux > zypper source-install PACKAGE_NAME

When executed as root, the default location to install source packages is /usr/src/packages/; when run as a regular user, it is ~/rpmbuild. These values can be changed in your local rpm configuration.

This command will also install the build dependencies of the specified package. If you do not want this, add the switch -D:

tux > sudo zypper source-install -D PACKAGE_NAME

To install only the build dependencies use -d.

tux > sudo zypper source-install -d PACKAGE_NAME

Of course, this will only work if you have the repository with the source packages enabled in your repository list (it is added by default, but not enabled). See Section 6.1.5, “Managing Repositories with Zypper” for details on repository management.

A list of all source packages available in your repositories can be obtained with:

tux > zypper search -t srcpackage

You can also download source packages for all installed packages to a local directory. To download source packages, use:

tux > zypper source-download

The default download directory is /var/cache/zypper/source-download. You can change it using the --directory option. To only show missing or extraneous packages without downloading or deleting anything, use the --status option. To delete extraneous source packages, use the --delete option. To disable deleting, use the --no-delete option.
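
For example, to only check which source packages are missing or extraneous without downloading or deleting anything, combine the command with the --status option:

tux > zypper source-download --status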

6.1.2.6 Installing Packages from Disabled Repositories

Normally you can only install or refresh packages from enabled repositories. The --plus-content TAG option helps you specify repositories to be refreshed, temporarily enabled during the current Zypper session, and disabled after it completes.

For example, to enable repositories that may provide additional -debuginfo or -debugsource packages, use --plus-content debug. You can specify this option multiple times.

To temporarily enable such 'debug' repositories to install a specific -debuginfo package, use the option as follows:

tux > sudo zypper --plus-content debug \
   install "debuginfo(build-id)=eb844a5c20c70a59fc693cd1061f851fb7d046f4"

The build-id string is reported by gdb for missing debuginfo packages.

6.1.2.7 Utilities

To verify whether all dependencies are still fulfilled and to repair missing dependencies, use:

tux > zypper verify

In addition to dependencies that must be fulfilled, some packages recommend other packages. These recommended packages are only installed if actually available and installable. In case recommended packages were made available after the recommending package has been installed (by adding additional packages or hardware), use the following command:

tux > sudo zypper install-new-recommends

This command is very useful after plugging in a Web cam or Wi-Fi device. It will install drivers for the device and related software, if available. Drivers and related software are only installable if certain hardware dependencies are fulfilled.

6.1.3 Updating Software with Zypper

There are three different ways to update software using Zypper: by installing patches, by installing a new version of a package or by updating the entire distribution. The latter is achieved with zypper dist-upgrade. Upgrading SUSE Linux Enterprise Desktop is discussed in Chapter 16, Upgrading SUSE Linux Enterprise.

6.1.3.1 Installing All Needed Patches

To install all officially released patches that apply to your system, run:

tux > sudo zypper patch

All patches available from repositories configured on your computer are checked for their relevance to your installation. If they are relevant (and not classified as optional or feature), they are installed immediately. Note that the official update repository is only available after registering your SUSE Linux Enterprise Desktop installation.

If a patch that is about to be installed includes changes that require a system reboot, you will be warned beforehand.

The plain zypper patch command does not apply patches from third party repositories. To also update the third party repositories, use the --with-update command option as follows:

tux > sudo zypper patch --with-update

To install also optional patches, use:

tux > sudo zypper patch --with-optional

To install all patches relating to a specific Bugzilla issue, use:

tux > sudo zypper patch --bugzilla=NUMBER

To install all patches relating to a specific CVE database entry, use:

tux > sudo zypper patch --cve=NUMBER

For example, to install a security patch with the CVE number CVE-2010-2713, execute:

tux > sudo zypper patch --cve=CVE-2010-2713

To install only patches which affect Zypper and the package management itself, use:

tux > sudo zypper patch --updatestack-only

Bear in mind that other command options that would also update other repositories will be dropped if you use the --updatestack-only command option.

6.1.3.2 Listing Patches

To find out whether patches are available, Zypper allows viewing the following information:

Number of Needed Patches

To list the number of needed patches (patches that apply to your system but are not yet installed), use patch-check:

tux > zypper patch-check
Loading repository data...
Reading installed packages...
5 patches needed (1 security patch)

This command can be combined with the --updatestack-only option to list only the patches which affect Zypper and the package management itself.

List of Needed Patches

To list all needed patches (patches that apply to your system but are not yet installed), use list-patches:

tux > zypper list-patches
Loading repository data...
Reading installed packages...

Repository     | Name        | Version | Category | Status  | Summary
---------------+-------------+---------+----------+---------+---------
SLES12-Updates | SUSE-2014-8 | 1       | security | needed  | openssl: Update for OpenSSL
List of All Patches

To list all patches available for SUSE Linux Enterprise Desktop, regardless of whether they are already installed or apply to your installation, use zypper patches.

It is also possible to list and install patches relevant to specific issues. To list specific patches, use the zypper list-patches command with the following options:

By Bugzilla Issues

To list all needed patches that relate to Bugzilla issues, use the option --bugzilla.

To list patches for a specific bug, you can also specify a bug number: --bugzilla=NUMBER. To search for patches relating to multiple Bugzilla issues, add commas between the bug numbers, for example:

tux > zypper list-patches --bugzilla=972197,956917
By CVE Number

To list all needed patches that relate to an entry in the CVE database (Common Vulnerabilities and Exposures), use the option --cve.

To list patches for a specific CVE database entry, you can also specify a CVE number: --cve=NUMBER. To search for patches relating to multiple CVE database entries, add commas between the CVE numbers, for example:

tux > zypper list-patches --cve=CVE-2016-2315,CVE-2016-2324

To list all patches regardless of whether they are needed, use the option --all additionally. For example, to list all patches with a CVE number assigned, use:

tux > zypper list-patches --all --cve
Issue | No.           | Patch             | Category    | Severity  | Status
------+---------------+-------------------+-------------+-----------+----------
cve   | CVE-2015-0287 | SUSE-SLE-Module.. | recommended | moderate  | needed
cve   | CVE-2014-3566 | SUSE-SLE-SERVER.. | recommended | moderate  | not needed
[...]

6.1.3.3 Installing New Package Versions

If a repository contains only new packages, but does not provide patches, zypper patch does not show any effect. To update all installed packages with newer available versions (while maintaining system integrity), use:

tux > sudo zypper update

To update individual packages, specify the package with either the update or install command:

tux > sudo zypper update PACKAGE_NAME
sudo zypper install PACKAGE_NAME

A list of all new installable packages can be obtained with the command:

tux > zypper list-updates

Note that this command only lists packages that match the following criteria:

  • has the same vendor as the already installed package,

  • is provided by repositories with at least the same priority as the already installed package,

  • is installable (all dependencies are satisfied).

A list of all new available packages (regardless whether installable or not) can be obtained with:

tux > sudo zypper list-updates --all

To find out why a new package cannot be installed, use the zypper install or zypper update command as described above.

6.1.3.4 Identifying Orphaned Packages

Whenever you remove a repository from Zypper or upgrade your system, some packages can end up in an orphaned state. These orphaned packages no longer belong to any active repository. The following command gives you a list of these:

tux > sudo zypper packages --orphaned

With this list, you can decide if a package is still needed or can be removed safely.

6.1.4 Identifying Processes and Services Using Deleted Files

When patching, updating or removing packages, there may be running processes on the system which continue to use files that have been deleted by the update or removal. Use zypper ps to list processes using deleted files. In case the process belongs to a known service, the service name is listed, making it easy to restart the service. By default zypper ps shows a table:

tux > zypper ps
PID   | PPID | UID | User  | Command      | Service      | Files
------+------+-----+-------+--------------+--------------+-------------------
814   | 1    | 481 | avahi | avahi-daemon | avahi-daemon | /lib64/ld-2.19.s->
      |      |     |       |              |              | /lib64/libdl-2.1->
      |      |     |       |              |              | /lib64/libpthrea->
      |      |     |       |              |              | /lib64/libc-2.19->
[...]
PID: ID of the process
PPID: ID of the parent process
UID: ID of the user running the process
User: Login name of the user running the process
Command: Command used to execute the process
Service: Service name (only if command is associated with a system service)
Files: The list of the deleted files

The output format of zypper ps can be controlled as follows:

zypper ps -s

Create a short table not showing the deleted files.

tux > zypper ps -s
PID   | PPID | UID  | User    | Command      | Service
------+------+------+---------+--------------+--------------
814   | 1    | 481  | avahi   | avahi-daemon | avahi-daemon
817   | 1    | 0    | root    | irqbalance   | irqbalance
1567  | 1    | 0    | root    | sshd         | sshd
1761  | 1    | 0    | root    | master       | postfix
1764  | 1761 | 51   | postfix | pickup       | postfix
1765  | 1761 | 51   | postfix | qmgr         | postfix
2031  | 2027 | 1000 | tux     | bash         |
zypper ps -ss

Show only processes associated with a system service.

tux > zypper ps -ss
PID   | PPID | UID  | User    | Command      | Service
------+------+------+---------+--------------+--------------
814   | 1    | 481  | avahi   | avahi-daemon | avahi-daemon
817   | 1    | 0    | root    | irqbalance   | irqbalance
1567  | 1    | 0    | root    | sshd         | sshd
1761  | 1    | 0    | root    | master       | postfix
1764  | 1761 | 51   | postfix | pickup       | postfix
1765  | 1761 | 51   | postfix | qmgr         | postfix
zypper ps -sss

Only show system services using deleted files.

tux > zypper ps -sss
avahi-daemon
irqbalance
postfix
sshd
zypper ps --print "systemctl status %s"

Show the commands to retrieve status information for services which might need a restart.

tux > zypper ps --print "systemctl status %s"
systemctl status avahi-daemon
systemctl status irqbalance
systemctl status postfix
systemctl status sshd
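
As a minimal sketch (assuming every name printed by zypper ps -sss is a valid systemd unit), the affected services can also be restarted in one go:

tux > sudo zypper ps -sss | xargs -r -n1 sudo systemctl restart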

For more information about service handling refer to Chapter 14, The systemd Daemon.

6.1.5 Managing Repositories with Zypper

All installation or patch commands of Zypper rely on a list of known repositories. To list all repositories known to the system, use the command:

tux > zypper repos

The result will look similar to the following output:

Example 6.1: Zypper—List of Known Repositories
tux > zypper repos
# | Alias        | Name          | Enabled | Refresh
--+--------------+---------------+---------+--------
1 | SLEHA-12-GEO | SLEHA-12-GEO  | Yes     | No
2 | SLEHA-12     | SLEHA-12      | Yes     | No
3 | SLES12       | SLES12        | Yes     | No

When specifying repositories in various commands, an alias, URI or repository number from the zypper repos command output can be used. A repository alias is a short version of the repository name for use in repository handling commands. Note that the repository numbers can change after modifying the list of repositories. The alias will never change by itself.

By default, details such as the URI or the priority of the repository are not displayed. Use the following command to list all details:

tux > zypper repos -d

6.1.5.1 Adding Repositories

To add a repository, run

tux > sudo zypper addrepo URI ALIAS

URI can either be an Internet repository, a network resource, a directory or a CD or DVD (see http://en.opensuse.org/openSUSE:Libzypp_URIs for details). The ALIAS is a shorthand and unique identifier of the repository. You can freely choose it, with the only exception that it needs to be unique. Zypper will issue a warning if you specify an alias that is already in use.
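
For example, the following sketch adds a hypothetical update repository (both the URI and the alias are made-up values):

tux > sudo zypper addrepo http://download.example.com/repositories/updates example-updates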

6.1.5.2 Refreshing Repositories

Zypper enables you to fetch changes in packages from configured repositories. To fetch the changes, run:

tux > sudo zypper refresh
Note
Note: Default Behavior of zypper

By default, some commands perform refresh automatically, so you do not need to run the command explicitly.

The refresh command can also fetch changes in disabled repositories, by using the --plus-content option:

tux > sudo zypper --plus-content refresh

This option fetches changes in repositories, but keeps the disabled repositories in the same state—disabled.

6.1.5.3 Removing Repositories

To remove a repository from the list, use the command zypper removerepo together with the alias or number of the repository you want to delete. For example, to remove the repository SLEHA-12-GEO from Example 6.1, “Zypper—List of Known Repositories”, use one of the following commands:

tux > sudo zypper removerepo 1
tux > sudo zypper removerepo "SLEHA-12-GEO"

6.1.5.4 Modifying Repositories

Enable or disable repositories with zypper modifyrepo. You can also alter the repository's properties (such as refreshing behavior, name or priority) with this command. The following command will enable the repository named updates, turn on auto-refresh and set its priority to 20:

tux > sudo zypper modifyrepo -er -p 20 'updates'

Modifying repositories is not limited to a single repository—you can also operate on groups:

-a: all repositories
-l: local repositories
-t: remote repositories
-m TYPE: repositories of a certain type (where TYPE can be one of the following: http, https, ftp, cd, dvd, dir, file, cifs, smb, nfs, hd, iso)
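
For example, the following sketch combines the modifyrepo option -d (disable) with the -m TYPE group selector to disable all repositories located on CD media in a single call:

tux > sudo zypper modifyrepo -d -m cd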

To rename a repository alias, use the renamerepo command. The following example changes the alias from Mozilla Firefox to firefox:

tux > sudo zypper renamerepo 'Mozilla Firefox' firefox

6.1.6 Querying Repositories and Packages with Zypper

Zypper offers various methods to query repositories or packages. To get lists of all products, patterns, packages or patches available, use the following commands:

tux > zypper products
tux > zypper patterns
tux > zypper packages
tux > zypper patches

To query all repositories for certain packages, use search. To get information regarding particular packages, use the info command.

6.1.6.1 zypper search Usage

The zypper search command works on package names, or, optionally, on package summaries and descriptions. Strings wrapped in / are interpreted as regular expressions. By default, the search is not case-sensitive.

Simple search for a package name containing fire
tux > zypper search "fire"
Simple search for the exact package MozillaFirefox
tux > zypper search --match-exact "MozillaFirefox"
Also search in package descriptions and summaries
tux > zypper search -d fire
Only display packages not already installed
tux > zypper search -u fire
Display packages containing the string fir not followed by e
tux > zypper se "/fir[^e]/"

6.1.6.2 zypper what-provides Usage

To search for packages which provide a special capability, use the command what-provides. For example, if you want to know which package provides the Perl module SVN::Core, use the following command:

tux > zypper what-provides 'perl(SVN::Core)'

The command what-provides PACKAGE_NAME is similar to rpm -q --whatprovides PACKAGE_NAME, but RPM is only able to query the RPM database (that is, the database of all installed packages). Zypper, on the other hand, will tell you about providers of the capability from any repository, not only those packages that are installed.

6.1.6.3 zypper info Usage

To query single packages, use info with an exact package name as an argument. This displays detailed information about a package. In case the package name does not match any package name from repositories, the command outputs detailed information for non-package matches. If you request a specific type (by using the -t option) and the type does not exist, the command outputs other available matches but without detailed information.

If you specify a source package, the command displays binary packages built from the source package. If you specify a binary package, the command outputs the source packages used to build the binary package.

To also show what is required/recommended by the package, use the options --requires and --recommends:

tux > zypper info --requires MozillaFirefox

6.1.7 Configuring Zypper

Zypper now comes with a configuration file, allowing you to permanently change Zypper's behavior (either system-wide or user-specific). For system-wide changes, edit /etc/zypp/zypper.conf. For user-specific changes, edit ~/.zypper.conf. If ~/.zypper.conf does not yet exist, you can use /etc/zypp/zypper.conf as a template: copy it to ~/.zypper.conf and adjust it to your liking. Refer to the comments in the file for help about the available options.
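
For example, to create the user-specific configuration from the system-wide template as described above:

tux > cp /etc/zypp/zypper.conf ~/.zypper.conf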

6.1.8 Troubleshooting

If you have trouble accessing packages from configured repositories (for example, Zypper cannot find a certain package even though you know it exists in one of the repositories), refreshing the repositories may help:

tux > sudo zypper refresh

If that does not help, try

tux > sudo zypper refresh -fdb

This forces a complete refresh and rebuild of the database, including a forced download of raw metadata.

6.1.9 Zypper Rollback Feature on Btrfs File System

If the Btrfs file system is used on the root partition and snapper is installed, Zypper automatically calls snapper when committing changes to the file system to create appropriate file system snapshots. These snapshots can be used to revert any changes made by Zypper. See Chapter 7, System Recovery and Snapshot Management with Snapper for more information.

6.1.10 For More Information

For more information on managing software from the command line, enter zypper help, zypper help COMMAND or refer to the zypper(8) man page. For a complete and detailed command reference, cheat sheets with the most important commands, and information on how to use Zypper in scripts and applications, refer to http://en.opensuse.org/SDB:Zypper_usage. A list of software changes for the latest SUSE Linux Enterprise Desktop version can be found at http://en.opensuse.org/openSUSE:Zypper_versions.

6.2 RPM—the Package Manager

RPM (RPM Package Manager) is used for managing software packages. Its main commands are rpm and rpmbuild. The powerful RPM database can be queried by the users, system administrators and package builders for detailed information about the installed software.

Essentially, rpm has five modes: installing, uninstalling (or updating) software packages; rebuilding the RPM database; querying the RPM database or individual RPM archives; integrity checking of packages; and signing packages. rpmbuild can be used to build installable packages from pristine sources.

Installable RPM archives are packed in a special binary format. These archives consist of the program files to install and certain meta information used during the installation by rpm to configure the software package or stored in the RPM database for documentation purposes. RPM archives normally have the extension .rpm.

Tip
Tip: Software Development Packages

For several packages, the components needed for software development (libraries, headers, include files, etc.) have been put into separate packages. These development packages are only needed if you want to compile software yourself (for example, the most recent GNOME packages). They can be identified by the name extension -devel, such as the packages alsa-devel and gimp-devel.

6.2.1 Verifying Package Authenticity

RPM packages have a GPG signature. To verify the signature of an RPM package, use the command rpm --checksig PACKAGE-1.2.3.rpm to determine whether the package originates from SUSE or from another trustworthy facility. This is especially recommended for update packages from the Internet.

While fixing issues in the operating system, you might need to install a Problem Temporary Fix (PTF) into a production system. The packages provided by SUSE are signed with a special PTF key. However, in contrast to SUSE Linux Enterprise 11, this key is not imported by default on SUSE Linux Enterprise 12 systems. To manually import the key, use the following command:

tux > sudo rpm --import \
/usr/share/doc/packages/suse-build-key/suse_ptf_key.asc

After importing the key, you can install PTF packages on your system.

6.2.2 Managing Packages: Install, Update, and Uninstall

Normally, the installation of an RPM archive is quite simple: rpm -i PACKAGE.rpm. With this command the package is installed, but only if its dependencies are fulfilled and if there are no conflicts with other packages. If dependencies are not met, rpm issues an error message listing the packages that need to be installed to meet dependency requirements. In the background, the RPM database ensures that no conflicts arise—a specific file can only belong to one package. By choosing different options, you can force rpm to ignore these defaults, but this is only for experts. Otherwise, you risk compromising the integrity of the system and possibly jeopardize the ability to update the system.

The options -U or --upgrade and -F or --freshen can be used to update a package (for example, rpm -F PACKAGE.rpm). This command removes the files of the old version and immediately installs the new files. The difference between the two versions is that -U installs packages that previously did not exist in the system, while -F merely updates previously installed packages. When updating, rpm updates configuration files carefully using the following strategy:

  • If a configuration file was not changed by the system administrator, rpm installs the new version of the appropriate file. No action by the system administrator is required.

  • If a configuration file was changed by the system administrator before the update, rpm saves the changed file with the extension .rpmorig or .rpmsave (backup file) and installs the version from the new package. This is done only if the originally installed file and the newer version are different. If this is the case, compare the backup file (.rpmorig or .rpmsave) with the newly installed file and make your changes again in the new file. Afterward, delete all .rpmorig and .rpmsave files to avoid problems with future updates.

  • .rpmnew files appear if the configuration file already exists and if the noreplace label was specified in the .spec file.

Following an update, .rpmsave and .rpmnew files should be removed after comparing them, so they do not obstruct future updates. The .rpmorig extension is assigned if the file has not previously been recognized by the RPM database; otherwise, .rpmsave is used. In other words, .rpmorig results from updating from a foreign format to RPM, while .rpmsave results from updating from an older RPM to a newer RPM. .rpmnew does not disclose any information about whether the system administrator has made any changes to the configuration file. A list of these files is available in /var/adm/rpmconfigcheck. Some configuration files (like /etc/httpd/httpd.conf) are not overwritten to allow continued operation.
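
As a minimal sketch for locating leftover backup files after an update (searching /etc is an assumption; such files can appear wherever a package owns configuration files):

tux > sudo find /etc -name '*.rpmnew' -o -name '*.rpmorig' -o -name '*.rpmsave'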

The -U switch is not just an equivalent to uninstalling with the -e option and installing with the -i option. Use -U whenever possible.

To remove a package, enter rpm -e PACKAGE. This command only deletes the package if there are no unresolved dependencies. It is theoretically impossible to delete Tcl/Tk, for example, as long as another application requires it. Even in this case, RPM calls for assistance from the database. If such a deletion is, for whatever reason, impossible (even if no additional dependencies exist), it may be helpful to rebuild the RPM database using the option --rebuilddb.

6.2.3 Delta RPM Packages

Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM onto an old RPM results in a completely new RPM. It is not necessary to have a copy of the old RPM because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs.

The makedeltarpm and applydeltarpm binaries are part of the delta RPM suite (package deltarpm) and help you create and apply delta RPM packages. The following command creates a delta RPM called new.delta.rpm; it assumes that old.rpm and new.rpm are present:

tux > sudo makedeltarpm old.rpm new.rpm new.delta.rpm

Using applydeltarpm, you can reconstruct the new RPM from the file system if the old package is already installed:

tux > sudo applydeltarpm new.delta.rpm new.rpm

To derive it from the old RPM without accessing the file system, use the -r option:

tux > sudo applydeltarpm -r old.rpm new.delta.rpm new.rpm

See /usr/share/doc/packages/deltarpm/README for technical details.

6.2.4 RPM Queries

With the -q option rpm initiates queries, making it possible to inspect an RPM archive (by adding the option -p) and to query the RPM database of installed packages. Several switches are available to specify the type of information required. See Table 6.1, “The Most Important RPM Query Options”.

Table 6.1: The Most Important RPM Query Options

-i

Package information

-l

File list

-f FILE

Query the package that contains the file FILE (the full path must be specified with FILE)

-s

File list with status information (implies -l)

-d

List only documentation files (implies -l)

-c

List only configuration files (implies -l)

--dump

File list with complete details (to be used with -l, -c, or -d)

--provides

List features of the package that another package can request with --requires

--requires, -R

Capabilities the package requires

--scripts

Installation scripts (preinstall, postinstall, uninstall)

For example, the command rpm -q -i wget displays the information shown in Example 6.2, “rpm -q -i wget”.

Example 6.2: rpm -q -i wget
Name        : wget
Version     : 1.14
Release     : 17.1
Architecture: x86_64
Install Date: Mon 30 Jan 2017 14:01:29 CET
Group       : Productivity/Networking/Web/Utilities
Size        : 2046483
License     : GPL-3.0+
Signature   : RSA/SHA256, Thu 08 Dec 2016 07:48:44 CET, Key ID 70af9e8139db7c82
Source RPM  : wget-1.14-17.1.src.rpm
Build Date  : Thu 08 Dec 2016 07:48:34 CET
Build Host  : sheep09
Relocations : (not relocatable)
Packager    : https://www.suse.com/
Vendor      : SUSE LLC <https://www.suse.com/>
URL         : http://www.gnu.org/software/wget/
Summary     : A Tool for Mirroring FTP and HTTP Servers
Description :
Wget enables you to retrieve WWW documents or FTP files from a server.
This can be done in script files or via the command line.
Distribution: SUSE Linux Enterprise 12

The option -f only works if you specify the complete file name with its full path. Provide as many file names as desired. For example:

tux > rpm -q -f /bin/rpm /usr/bin/wget
rpm-4.11.2-15.1.x86_64
wget-1.14-17.1.x86_64

If only part of the file name is known, use a shell script as shown in Example 6.3, “Script to Search for Packages”. Pass the partial file name to the script shown as a parameter when running it.

Example 6.3: Script to Search for Packages
#! /bin/sh
# List the package that owns each installed file whose path matches $1.
for i in $(rpm -q -a -l | grep "$1"); do
    echo "\"$i\" is in package:"
    rpm -q -f "$i"
    echo ""
done
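
For example, assuming the script was saved as searchpkg.sh (a hypothetical file name), a query for the partial file name wgetrc would look like this:

tux > sh searchpkg.sh wgetrc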

The command rpm -q --changelog PACKAGE displays a detailed list of change information about a specific package, sorted by date.

With the installed RPM database, verification checks can be made. Initiate these with -V or --verify. With this option, rpm shows all files in a package that have been changed since installation. rpm uses eight single-character symbols to give some hints about the following changes:

Table 6.2: RPM Verify Options

5

MD5 check sum

S

File size

L

Symbolic link

T

Modification time

D

Major and minor device numbers

U

Owner

G

Group

M

Mode (permissions and file type)

In the case of configuration files, the letter c is printed. For example, for changes to /etc/wgetrc (wget package):

tux > rpm -V wget
S.5....T c /etc/wgetrc

The files of the RPM database are placed in /var/lib/rpm. If the partition /usr has a size of 1 GB, this database can occupy nearly 30 MB, especially after a complete update. If the database is much larger than expected, it is useful to rebuild the database with the option --rebuilddb. Before doing this, make a backup of the old database. The cron script cron.daily makes daily copies of the database (packed with gzip) and stores them in /var/adm/backup/rpmdb. The number of copies is controlled by the variable MAX_RPMDB_BACKUPS (default: 5) in /etc/sysconfig/backup. The size of a single backup is approximately 1 MB for 1 GB in /usr.
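
As a minimal sketch of such a rebuild, including the recommended backup (the backup path is an assumption):

tux > sudo cp -a /var/lib/rpm /var/lib/rpm.bak
tux > sudo rpm --rebuilddb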

6.2.5 Installing and Compiling Source Packages

All source packages carry a .src.rpm extension (source RPM).

Note
Note: Installed Source Packages

Source packages can be copied from the installation medium to the hard disk and unpacked with YaST. They are not, however, marked as installed ([i]) in the package manager. This is because the source packages are not entered in the RPM database. Only installed operating system software is listed in the RPM database. When you install a source package, only the source code is added to the system.

The following directories must be available for rpm and rpmbuild in /usr/src/packages (unless you specified custom settings in a file like /etc/rpmrc):

SOURCES

for the original sources (.tar.bz2 or .tar.gz files, etc.) and for distribution-specific adjustments (mostly .diff or .patch files)

SPECS

for the .spec files, similar to a meta Makefile, which control the build process

BUILD

all the sources are unpacked, patched and compiled in this directory

RPMS

where the completed binary packages are stored

SRPMS

where the source RPMs are stored

When you install a source package with YaST, all the necessary components are installed in /usr/src/packages: the sources and the adjustments in SOURCES and the relevant .spec file in SPECS.

Warning
Warning: System Integrity

Do not experiment with system components (glibc, rpm, etc.), because this endangers the stability of your system.

The following example uses the wget.src.rpm package. After installing the source package, you should have files similar to those in the following list:

/usr/src/packages/SOURCES/wget-1.11.4.tar.bz2
/usr/src/packages/SOURCES/wgetrc.patch
/usr/src/packages/SPECS/wget.spec

rpmbuild -bX /usr/src/packages/SPECS/wget.spec starts the compilation. X is a wild card for various stages of the build process (see the output of --help or the RPM documentation for details). The following is merely a brief explanation:

-bp

Prepare sources in /usr/src/packages/BUILD: unpack and patch.

-bc

Do the same as -bp, but with additional compilation.

-bi

Do the same as -bp, but with additional installation of the built software. Caution: if the package does not support the BuildRoot feature, you might overwrite configuration files.

-bb

Do the same as -bi, but with the additional creation of the binary package. If the compile was successful, the binary should be in /usr/src/packages/RPMS.

-ba

Do the same as -bb, but with the additional creation of the source RPM. If the compilation was successful, the source RPM should be in /usr/src/packages/SRPMS.

--short-circuit

Skip some steps.

The binary RPM created can now be installed with rpm -i or, preferably, with rpm -U. Installation with rpm makes it appear in the RPM database.
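
For example, a sketch of a complete build followed by the installation of the result (the version, release, and architecture in the path are assumptions that depend on your build):

root # rpmbuild -ba /usr/src/packages/SPECS/wget.spec
root # rpm -U /usr/src/packages/RPMS/x86_64/wget-1.11.4-1.x86_64.rpm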

Keep in mind that the BuildRoot directive in the spec file has been deprecated since SUSE Linux Enterprise Desktop 12. If you still need this feature, use the --buildroot option as a workaround. For a more detailed background, see the support database at https://www.suse.com/support/kb/doc?id=7017104.

6.2.6 Compiling RPM Packages with build

The danger with many packages is that unwanted files are added to the running system during the build process. To prevent this, use build, which creates a defined environment in which the package is built. To establish this chroot environment, the build script must be provided with a complete package tree. This tree can be made available on the hard disk, via NFS, or from DVD. Set the position with build --rpms DIRECTORY. Unlike rpm, the build command looks for the .spec file in the source directory. To build wget (as in the above example) with the DVD mounted in the system under /media/dvd, use the following commands as root:

root # cd /usr/src/packages/SOURCES/
root # mv ../SPECS/wget.spec .
root # build --rpms /media/dvd/suse/ wget.spec

Subsequently, a minimum environment is established at /var/tmp/build-root. The package is built in this environment. Upon completion, the resulting packages are located in /var/tmp/build-root/usr/src/packages/RPMS.

The build script offers several additional options. For example, you can cause the script to prefer your own RPMs, omit the initialization of the build environment, or limit the rpm command to one of the above-mentioned stages. Access additional information with build --help and by reading the build man page.

6.2.7 Tools for RPM Archives and the RPM Database

Midnight Commander (mc) can display the contents of RPM archives and copy parts of them. It represents archives as virtual file systems, offering all usual menu options of Midnight Commander. Display the HEADER with F3. View the archive structure with the cursor keys and Enter. Copy archive components with F5.

A full-featured package manager is available as a YaST module. For details, see Chapter 10, Installing or Removing Software.

7 System Recovery and Snapshot Management with Snapper

Abstract

Being able to do file system snapshots on Linux, providing the ability to do rollbacks, is a feature that was often requested in the past. Snapper, together with the Btrfs file system or thin-provisioned LVM volumes, now fills that gap.

Btrfs, a new copy-on-write file system for Linux, supports file system snapshots (a copy of the state of a subvolume at a certain point of time) of subvolumes (one or more separately mountable file systems within each physical partition). Snapshots are also supported on thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets you create and manage these snapshots. It comes with a command line and a YaST interface. Starting with SUSE Linux Enterprise Server 12 it is also possible to boot from Btrfs snapshots—see Section 7.3, “System Rollback by Booting from Snapshots” for more information.

Using Snapper, you can undo system changes made by YaST and Zypper, restore files from previous snapshots, and roll back the complete system by booting from a snapshot.

7.1 Default Setup

Snapper on SUSE Linux Enterprise Desktop is set up to serve as an undo and recovery tool for system changes. By default, the root partition (/) of SUSE Linux Enterprise Desktop is formatted with Btrfs. Taking snapshots is automatically enabled if the root partition (/) is big enough (approximately more than 16 GB). Taking snapshots on partitions other than / is not enabled by default.

Tip
Tip: Enabling Snapper in the Installed System

If you disabled Snapper during the installation, you can enable it at any time later. To do so, create a default Snapper configuration for the root file system by running

tux > sudo snapper -c root create-config /

Afterward enable the different snapshot types as described in Section 7.1.3.1, “Disabling/Enabling Snapshots”.

Keep in mind that snapshots require a Btrfs root file system with subvolumes set up as proposed by the installer and a partition size of at least 16 GB.

When a snapshot is created, both the snapshot and the original point to the same blocks in the file system. So, initially a snapshot does not occupy additional disk space. If data in the original file system is modified, changed data blocks are copied while the old data blocks are kept for the snapshot. Therefore, a snapshot occupies the same amount of space as the data modified. So, over time, the amount of space a snapshot allocates constantly grows. As a consequence, deleting files from a Btrfs file system containing snapshots may not free disk space!

Note
Note: Snapshot Location

Snapshots always reside on the same partition or subvolume on which the snapshot has been taken. It is not possible to store snapshots on a different partition or subvolume.

As a result, partitions containing snapshots need to be larger than normal partitions. The exact amount strongly depends on the number of snapshots you keep and the amount of data modifications. As a rule of thumb, consider using twice the size you normally would. To prevent disks from running out of space, old snapshots are automatically cleaned up. Refer to Section 7.1.3.4, “Controlling Snapshot Archiving” for details.

7.1.1 Types of Snapshots

Although snapshots themselves do not differ in a technical sense, we distinguish between three types of snapshots, based on the events that trigger them:

Timeline Snapshots

A single snapshot is created every hour. Old snapshots are automatically deleted. By default, the first snapshot of each of the last ten days, months, and years is kept. Timeline snapshots are disabled by default.

Installation Snapshots

Whenever one or more packages are installed with YaST or Zypper, a pair of snapshots is created: one before the installation starts (Pre) and another one after the installation has finished (Post). In case an important system component such as the kernel has been installed, the snapshot pair is marked as important (important=yes). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten regular snapshots (including administration snapshots) are kept. Installation snapshots are enabled by default.

Administration Snapshots

Whenever you administrate the system with YaST, a pair of snapshots is created: one when a YaST module is started (Pre) and another when the module is closed (Post). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten regular snapshots (including installation snapshots) are kept. Administration snapshots are enabled by default.
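To verify that installation and administration snapshots are being taken on your system, you can list the existing pre and post snapshot pairs (see Section 7.5.1, “Snapshot Metadata” for details):

tux > sudo snapper list -t pre-post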

7.1.2 Directories That Are Excluded from Snapshots

Some directories need to be excluded from snapshots for different reasons. The following list shows all directories that are excluded:

/boot/grub2/i386-pc, /boot/grub2/x86_64-efi, /boot/grub2/powerpc-ieee1275, /boot/grub2/s390x-emu

A rollback of the boot loader configuration is not supported. The directories listed above are architecture-specific. The first two directories are present on AMD64/Intel 64 machines, the latter two on IBM POWER and on IBM z Systems, respectively.

/home

If /home does not reside on a separate partition, it is excluded to avoid data loss on rollbacks.

/opt, /var/opt

Third-party products usually get installed to /opt. It is excluded to avoid uninstalling these applications on rollbacks.

/srv

Contains data for Web and FTP servers. It is excluded to avoid data loss on rollbacks.

/tmp, /var/tmp, /var/cache, /var/crash

All directories containing temporary files and caches are excluded from snapshots.

/usr/local

This directory is used when manually installing software. It is excluded to avoid uninstalling these installations on rollbacks.

/var/lib/libvirt/images

The default location for virtual machine images managed with libvirt. Excluded to ensure virtual machine images are not replaced with older versions during a rollback. By default, this subvolume is created with the option no copy on write.

/var/lib/mailman, /var/spool

Directories containing mails or mail queues are excluded to avoid a loss of mails after a rollback.

/var/lib/named

Contains zone data for the DNS server. Excluded from snapshots to ensure a name server can operate after a rollback.

/var/lib/mariadb, /var/lib/mysql, /var/lib/pgsql

These directories contain database data. By default, these subvolumes are created with the option no copy on write.

/var/log

Log file location. Excluded from snapshots to allow log file analysis after the rollback of a broken system.
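Excluded directories are implemented as separate Btrfs subvolumes (see Section 7.7, “Frequently Asked Questions”). On a default installation you can inspect them with the following command; the exact output depends on your setup:

tux > sudo btrfs subvolume list /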

7.1.3 Customizing the Setup

SUSE Linux Enterprise Desktop comes with a reasonable default setup, which should be sufficient for most use cases. However, all aspects of taking automatic snapshots and snapshot keeping can be configured according to your needs.

7.1.3.1 Disabling/Enabling Snapshots

Each of the three snapshot types (timeline, installation, administration) can be enabled or disabled independently.

Disabling/Enabling Timeline Snapshots

Enabling:  snapper -c root set-config "TIMELINE_CREATE=yes"

Disabling:  snapper -c root set-config "TIMELINE_CREATE=no"

Timeline snapshots are enabled by default, except for the root partition.

Disabling/Enabling Installation Snapshots

Enabling:  Install the package snapper-zypp-plugin

Disabling:  Uninstall the package snapper-zypp-plugin

Installation snapshots are enabled by default.

Disabling/Enabling Administration Snapshots

Enabling:  Set USE_SNAPPER to yes in /etc/sysconfig/yast2.

Disabling:  Set USE_SNAPPER to no in /etc/sysconfig/yast2.

Administration snapshots are enabled by default.

7.1.3.2 Controlling Installation Snapshots

Taking snapshot pairs upon installing packages with YaST or Zypper is handled by the snapper-zypp-plugin. The XML configuration file /etc/snapper/zypp-plugin.conf defines when to make snapshots. By default the file looks like the following:

 1 <?xml version="1.0" encoding="utf-8"?>
 2 <snapper-zypp-plugin-conf>
 3  <solvables>
 4   <solvable match="w"1 important="true"2>kernel-*3</solvable>
 5   <solvable match="w" important="true">dracut</solvable>
 6   <solvable match="w" important="true">glibc</solvable>
 7   <solvable match="w" important="true">systemd*</solvable>
 8   <solvable match="w" important="true">udev</solvable>
 9   <solvable match="w">*</solvable>4
10  </solvables>
11 </snapper-zypp-plugin-conf>

1

The match attribute defines whether the pattern is a Unix shell-style wild card (w) or a Python regular expression (re).

2

If the given pattern matches and the corresponding package is marked as important (for example kernel packages), the snapshot will also be marked as important.

3

Pattern to match a package name. Based on the setting of the match attribute, special characters are either interpreted as shell wild cards or regular expressions. This pattern matches all package names starting with kernel-.

4

This line unconditionally matches all packages.

With this configuration, snapshot pairs are made whenever a package is installed (line 9). When packages matching the patterns marked as important, such as kernel, dracut, glibc, systemd, or udev, are installed, the snapshot pair will also be marked as important (lines 4 to 8). All rules are evaluated.
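For example, to additionally mark snapshot pairs as important whenever a hypothetical package named my-driver is installed, a rule like the following could be added before line 9:

<solvable match="w" important="true">my-driver*</solvable>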

To disable a rule, either delete it or deactivate it using XML comments. To prevent the system from making snapshot pairs for every package installation, for example, comment line 9:

 1 <?xml version="1.0" encoding="utf-8"?>
 2 <snapper-zypp-plugin-conf>
 3  <solvables>
 4   <solvable match="w" important="true">kernel-*</solvable>
 5   <solvable match="w" important="true">dracut</solvable>
 6   <solvable match="w" important="true">glibc</solvable>
 7   <solvable match="w" important="true">systemd*</solvable>
 8   <solvable match="w" important="true">udev</solvable>
 9   <!-- <solvable match="w">*</solvable> -->
10  </solvables>
11 </snapper-zypp-plugin-conf>

7.1.3.3 Creating and Mounting New Subvolumes

Creating a new subvolume underneath the / hierarchy and permanently mounting it is supported. Such a subvolume will be excluded from snapshots. Make sure not to create it inside an existing snapshot, since you would no longer be able to delete snapshots after a rollback.

SUSE Linux Enterprise Desktop is configured with the /@/ subvolume which serves as an independent root for permanent subvolumes such as /opt, /srv, /home and others. Any new subvolumes you create and permanently mount need to be created in this initial root file system.

To do so, run the following commands. In this example, a new subvolume /usr/important is created on /dev/sda2.

tux > sudo mount /dev/sda2 -o subvol=@ /mnt
tux > sudo btrfs subvolume create /mnt/usr/important
tux > sudo umount /mnt

The corresponding entry in /etc/fstab needs to look like the following:

/dev/sda2 /usr/important btrfs subvol=@/usr/important 0 0
Tip
Tip: Disable Copy-On-Write (cow)

A subvolume may contain files that constantly change, such as virtualized disk images, database files, or log files. If so, consider disabling the copy-on-write feature for this volume, to avoid duplication of disk blocks. Use the nodatacow mount option in /etc/fstab to do so:

/dev/sda2 /usr/important btrfs nodatacow,subvol=@/usr/important 0 0

To alternatively disable copy-on-write for single files or directories, use the command chattr +C PATH.
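A minimal sketch, reusing the subvolume created above as an illustrative path: create a directory, disable copy-on-write for it, and verify the attribute with lsattr. Note that chattr +C only takes reliable effect on empty files and on directories; files created in such a directory afterward inherit the attribute.

tux > sudo mkdir /usr/important/nocow
tux > sudo chattr +C /usr/important/nocow
tux > lsattr -d /usr/important/nocow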

7.1.3.4 Controlling Snapshot Archiving

Snapshots occupy disk space. To prevent disks from running out of space and thus causing system outages, old snapshots are automatically deleted. By default, up to ten important installation and administration snapshots and up to ten regular installation and administration snapshots are kept. If these snapshots occupy more than 50% of the root file system size, additional snapshots will be deleted. A minimum of four important and two regular snapshots are always kept.

Refer to Section 7.4.1, “Managing Existing Configurations” for instructions on how to change these values.

7.1.3.5 Using Snapper on Thin-Provisioned LVM Volumes

Apart from snapshots on Btrfs file systems, Snapper also supports taking snapshots on thin-provisioned LVM volumes (snapshots on regular LVM volumes are not supported) formatted with XFS, Ext4 or Ext3. For more information and setup instructions on LVM volumes, refer to Section 9.2, “LVM Configuration”.

To use Snapper on a thin-provisioned LVM volume, you need to create a Snapper configuration for it. On LVM it is required to specify the file system with --fstype=lvm(FILESYSTEM). ext3, ext4 or xfs are valid values for FILESYSTEM. Example:

tux > sudo snapper -c lvm create-config --fstype="lvm(xfs)" /thin_lvm

You can adjust this configuration according to your needs as described in Section 7.4.1, “Managing Existing Configurations”.

7.2 Using Snapper to Undo Changes

Snapper on SUSE Linux Enterprise Desktop is preconfigured to serve as a tool that lets you undo changes made by zypper and YaST. For this purpose, Snapper is configured to create a pair of snapshots before and after each run of zypper and YaST. Snapper also lets you restore system files that have been accidentally deleted or modified. Timeline snapshots for the root partition need to be enabled for this purpose—see Section 7.1.3.1, “Disabling/Enabling Snapshots” for details.

By default, automatic snapshots as described above are configured for the root partition and its subvolumes. To make snapshots available for other partitions such as /home for example, you can create custom configurations.

Important
Important: Undoing Changes Compared to Rollback

When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:

Undoing Changes

When undoing changes as described in the following, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly select the files that should be restored.

Rollback

When doing rollbacks as described in Section 7.3, “System Rollback by Booting from Snapshots”, the system is reset to the state at which the snapshot was taken.

When undoing changes, it is also possible to compare a snapshot against the current system. Restoring all files from such a comparison has the same result as doing a rollback. However, for rollbacks the method described in Section 7.3, “System Rollback by Booting from Snapshots” should be preferred, since it is faster and allows you to review the system before doing the rollback.

Warning
Warning: Data Consistency

There is no mechanism to ensure data consistency when creating a snapshot. Whenever a file (for example, a database) is written at the same time as the snapshot is created, the result is a corrupted or partially written file. Restoring such a file will cause problems. Furthermore, certain system files such as /etc/mtab must never be restored. Therefore it is strongly recommended to always closely review the list of changed files and their diffs. Only restore files that really belong to the action you want to revert.

7.2.1 Undoing YaST and Zypper Changes

If you set up the root partition with Btrfs during the installation, Snapper—preconfigured for undoing YaST or Zypper changes—will automatically be installed. Every time you start a YaST module or a Zypper transaction, two snapshots are created: a pre-snapshot capturing the state of the file system before the start of the module and a post-snapshot after the module has finished.

Using the YaST Snapper module or the snapper command line tool, you can undo the changes made by YaST/Zypper by restoring files from the pre-snapshot. By comparing two snapshots, the tools also allow you to see which files have been changed. You can also display the differences between two versions of a file (diff).

Procedure 7.1: Undoing Changes Using the YaST Snapper Module
  1. Start the Snapper module from the Miscellaneous section in YaST or by entering yast2 snapper.

  2. Make sure Current Configuration is set to root. This is always the case unless you have manually added your own Snapper configurations.

  3. Choose a pair of pre- and post-snapshots from the list. Both YaST and Zypper snapshot pairs are of the type Pre & Post. YaST snapshots are labeled zypp(y2base) in the Description column; Zypper snapshots are labeled zypp(zypper).

  4. Click Show Changes to open the list of files that differ between the two snapshots.

  5. Review the list of files. To display a diff between the pre- and post-version of a file, select it from the list.

  6. To restore one or more files, select the relevant files or directories by activating the respective check box. Click Restore Selected and confirm the action by clicking Yes.

    To restore a single file, activate its diff view by clicking its name. Click Restore From First and confirm your choice with Yes.

Procedure 7.2: Undoing Changes Using the snapper Command
  1. Get a list of YaST and Zypper snapshots by running snapper list -t pre-post. YaST snapshots are labeled zypp(y2base) in the Description column; Zypper snapshots are labeled zypp(zypper).

    tux > sudo snapper list -t pre-post
    Pre # | Post # | Pre Date                      | Post Date                     | Description
    ------+--------+-------------------------------+-------------------------------+--------------
    311   | 312    | Tue 06 May 2014 14:05:46 CEST | Tue 06 May 2014 14:05:52 CEST | zypp(y2base)
    340   | 341    | Wed 07 May 2014 16:15:10 CEST | Wed 07 May 2014 16:15:16 CEST | zypp(zypper)
    342   | 343    | Wed 07 May 2014 16:20:38 CEST | Wed 07 May 2014 16:20:42 CEST | zypp(y2base)
    344   | 345    | Wed 07 May 2014 16:21:23 CEST | Wed 07 May 2014 16:21:24 CEST | zypp(zypper)
    346   | 347    | Wed 07 May 2014 16:41:06 CEST | Wed 07 May 2014 16:41:10 CEST | zypp(y2base)
    348   | 349    | Wed 07 May 2014 16:44:50 CEST | Wed 07 May 2014 16:44:53 CEST | zypp(y2base)
    350   | 351    | Wed 07 May 2014 16:46:27 CEST | Wed 07 May 2014 16:46:38 CEST | zypp(y2base)
  2. Get a list of changed files for a snapshot pair with snapper status PRE..POST. Files with content changes are marked with c, files that have been added are marked with + and deleted files are marked with -.

    tux > sudo snapper status 350..351
    +..... /usr/share/doc/packages/mikachan-fonts
    +..... /usr/share/doc/packages/mikachan-fonts/COPYING
    +..... /usr/share/doc/packages/mikachan-fonts/dl.html
    c..... /usr/share/fonts/truetype/fonts.dir
    c..... /usr/share/fonts/truetype/fonts.scale
    +..... /usr/share/fonts/truetype/みかちゃん-p.ttf
    +..... /usr/share/fonts/truetype/みかちゃん-pb.ttf
    +..... /usr/share/fonts/truetype/みかちゃん-ps.ttf
    +..... /usr/share/fonts/truetype/みかちゃん.ttf
    c..... /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
    c..... /var/lib/rpm/Basenames
    c..... /var/lib/rpm/Dirnames
    c..... /var/lib/rpm/Group
    c..... /var/lib/rpm/Installtid
    c..... /var/lib/rpm/Name
    c..... /var/lib/rpm/Packages
    c..... /var/lib/rpm/Providename
    c..... /var/lib/rpm/Requirename
    c..... /var/lib/rpm/Sha1header
    c..... /var/lib/rpm/Sigmd5
  3. To display the diff for a certain file, run snapper diff PRE..POST FILENAME. If you do not specify FILENAME, a diff for all files will be displayed.

    tux > sudo snapper diff 350..351 /usr/share/fonts/truetype/fonts.scale
    --- /.snapshots/350/snapshot/usr/share/fonts/truetype/fonts.scale       2014-04-23 15:58:57.000000000 +0200
    +++ /.snapshots/351/snapshot/usr/share/fonts/truetype/fonts.scale       2014-05-07 16:46:31.000000000 +0200
    @@ -1,4 +1,4 @@
    -1174
    +1486
     ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso10646-1
     ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso8859-1
    [...]
  4. To restore one or more files run snapper -v undochange PRE..POST FILENAMES. If you do not specify any file names, all changed files will be restored.

    tux > sudo snapper -v undochange 350..351
         create:0 modify:13 delete:7
         undoing change...
         deleting /usr/share/doc/packages/mikachan-fonts
         deleting /usr/share/doc/packages/mikachan-fonts/COPYING
         deleting /usr/share/doc/packages/mikachan-fonts/dl.html
         deleting /usr/share/fonts/truetype/みかちゃん-p.ttf
         deleting /usr/share/fonts/truetype/みかちゃん-pb.ttf
         deleting /usr/share/fonts/truetype/みかちゃん-ps.ttf
         deleting /usr/share/fonts/truetype/みかちゃん.ttf
         modifying /usr/share/fonts/truetype/fonts.dir
         modifying /usr/share/fonts/truetype/fonts.scale
         modifying /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
         modifying /var/lib/rpm/Basenames
         modifying /var/lib/rpm/Dirnames
         modifying /var/lib/rpm/Group
         modifying /var/lib/rpm/Installtid
         modifying /var/lib/rpm/Name
         modifying /var/lib/rpm/Packages
         modifying /var/lib/rpm/Providename
         modifying /var/lib/rpm/Requirename
         modifying /var/lib/rpm/Sha1header
         modifying /var/lib/rpm/Sigmd5
         undoing change done
Warning
Warning: Reverting User Additions

Reverting user additions by undoing changes with Snapper is not recommended. Since certain directories are excluded from snapshots, files belonging to these users will remain in the file system. If a user with the same user ID as a deleted user is created, this user will inherit the files. Therefore it is strongly recommended to use the YaST User and Group Management tool to remove users.

7.2.2 Using Snapper to Restore Files

Apart from the installation and administration snapshots, Snapper creates timeline snapshots. You can use these backup snapshots to restore files that have accidentally been deleted or to restore a previous version of a file. By using Snapper's diff feature you can also find out which modifications have been made at a certain point in time.

Being able to restore files is especially interesting for data that may reside on subvolumes or partitions for which snapshots are not taken by default. To be able to restore files from home directories, for example, create a separate Snapper configuration for /home that takes automatic timeline snapshots (see the example below). See Section 7.4, “Creating and Modifying Snapper Configurations” for instructions.
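Assuming /home resides on a separate Btrfs partition or subvolume, a minimal sketch of such a setup could look like the following: create a configuration named home and switch on timeline snapshots for it.

tux > sudo snapper -c home create-config /home
tux > sudo snapper -c home set-config "TIMELINE_CREATE=yes"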

Warning
Warning: Restoring Files Compared to Rollback

Snapshots taken from the root file system (defined by Snapper's root configuration), can be used to do a system rollback. The recommended way to do such a rollback is to boot from the snapshot and then perform the rollback. See Section 7.3, “System Rollback by Booting from Snapshots” for details.

Performing a rollback would also be possible by restoring all files from a root file system snapshot as described below. However, this is not recommended. You may restore single files, for example a configuration file from the /etc directory, but not the complete list of files from the snapshot.

This restriction only affects snapshots taken from the root file system!

Procedure 7.3: Restoring Files Using the YaST Snapper Module
  1. Start the Snapper module from the Miscellaneous section in YaST or by entering yast2 snapper.

  2. Choose the Current Configuration from which to select a snapshot.

  3. Select a timeline snapshot from which to restore a file and choose Show Changes. Timeline snapshots are of the type Single with a description value of timeline.

  4. Select a file from the text box by clicking the file name. The difference between the snapshot version and the current system is shown. Activate the check box to select the file for restore. Do so for all files you want to restore.

  5. Click Restore Selected and confirm the action by clicking Yes.

Procedure 7.4: Restoring Files Using the snapper Command
  1. Get a list of timeline snapshots for a specific configuration by running the following command:

    tux > sudo snapper -c CONFIG list -t single | grep timeline

    CONFIG needs to be replaced by an existing Snapper configuration. Use snapper list-configs to display a list.

  2. Get a list of changed files for a given snapshot by running the following command:

    tux > sudo snapper -c CONFIG status SNAPSHOT_ID..0

    Replace SNAPSHOT_ID by the ID for the snapshot from which you want to restore the file(s).

  3. Optionally list the differences between the current file version and the one from the snapshot by running

    tux > sudo snapper -c CONFIG diff SNAPSHOT_ID..0 FILENAME

    If you do not specify FILENAME, the differences for all files are shown.

  4. To restore one or more files, run

    tux > sudo snapper -c CONFIG -v undochange SNAPSHOT_ID..0 FILENAME1 FILENAME2

    If you do not specify file names, all changed files will be restored.
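Putting these steps together, a complete run for the home configuration could look like the following sketch; the snapshot ID (467) and the file name are hypothetical examples:

tux > sudo snapper -c home list -t single | grep timeline
tux > sudo snapper -c home status 467..0
tux > sudo snapper -c home -v undochange 467..0 /home/tux/.bashrc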

7.3 System Rollback by Booting from Snapshots

The GRUB 2 version included on SUSE Linux Enterprise Desktop can boot from Btrfs snapshots. Together with Snapper's rollback feature, this allows you to recover a misconfigured system. Only snapshots created for the default Snapper configuration (root) are bootable.

Important
Important: Supported Configuration

As of SUSE Linux Enterprise Desktop 12 SP3 system rollbacks are only supported if the default subvolume configuration of the root partition has not been changed.

When booting a snapshot, the parts of the file system included in the snapshot are mounted read-only; all other file systems and parts that are excluded from snapshots are mounted read-write and can be modified.

Important
Important: Undoing Changes Compared to Rollback

When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:

Undoing Changes

When undoing changes as described in Section 7.2, “Using Snapper to Undo Changes”, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly exclude selected files from being restored.

Rollback

When doing rollbacks as described in the following, the system is reset to the state at which the snapshot was taken.

To do a rollback from a bootable snapshot, the following requirements must be met. When doing a default installation, the system is set up accordingly.

Requirements for a Rollback from a Bootable Snapshot
  • The root file system needs to be Btrfs. Booting from LVM volume snapshots is not supported.

  • The root file system needs to be on a single device, a single partition and a single subvolume. Directories that are excluded from snapshots such as /srv (see Section 7.1.2, “Directories That Are Excluded from Snapshots” for a full list) may reside on separate partitions.

  • The system needs to be bootable via the installed boot loader.

To perform a rollback from a bootable snapshot, do as follows:

  1. Boot the system. In the boot menu choose Bootable snapshots and select the snapshot you want to boot. Snapshots are listed by date, with the most recent snapshot first.

  2. Log in to the system. Carefully check whether everything works as expected. Note that you cannot write to any directory that is part of the snapshot. Data you write to other directories will not get lost, regardless of what you do next.

  3. Depending on whether you want to perform the rollback or not, choose your next step:

    1. If the system is in a state where you do not want to do a rollback, reboot to boot into the current system state. You can then choose a different snapshot, or start the rescue system.

    2. To perform the rollback, run

      tux > sudo snapper rollback

      and reboot afterward. On the boot screen, choose the default boot entry to reboot into the reinstated system. A snapshot of the file system status before the rollback is created. The default subvolume for root will be replaced with a fresh read-write snapshot. For details, see Section 7.3.1, “Snapshots after Rollback”.

      It is useful to add a description for the snapshot with the -d option. For example:

      New file system root since rollback on DATE TIME
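      Assuming this description, the full command could look like the following sketch (DATE TIME stands for the actual date and time and is not filled in here):

      tux > sudo snapper rollback -d "New file system root since rollback on DATE TIME"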
Tip
Tip: Rolling Back to a Specific Installation State

If snapshots are not disabled during installation, an initial bootable snapshot is created at the end of the initial system installation. You can go back to that state at any time by booting this snapshot. The snapshot can be identified by the description "after installation".

A bootable snapshot is also created when starting a system upgrade to a service pack or a new major release (provided snapshots are not disabled).

7.3.1 Snapshots after Rollback

Before a rollback is performed, a snapshot of the running file system is created. The description references the ID of the snapshot that was restored in the rollback.

Snapshots created by rollbacks receive the value number for the Cleanup attribute. The rollback snapshots are therefore automatically deleted when the set number of snapshots is reached. Refer to Section 7.6, “Automatic Snapshot Clean-Up” for details. If the snapshot contains important data, extract the data from the snapshot before it is removed.

7.3.1.1 Example of Rollback Snapshot

For example, after a fresh installation the following snapshots are available on the system:

root # snapper --iso list
Type   | # |     | Cleanup | Description           | Userdata
-------+---+ ... +---------+-----------------------+--------------
single | 0 |     |         | current               |
single | 1 |     |         | first root filesystem |
single | 2 |     | number  | after installation    | important=yes

After running sudo snapper rollback, snapshot 3 is created. It contains the state of the system before the rollback was executed. Snapshot 4 is the new default Btrfs subvolume and thus the system after a reboot.

root # snapper --iso list
Type   | # |     | Cleanup | Description           | Userdata
-------+---+ ... +---------+-----------------------+--------------
single | 0 |     |         | current               |
single | 1 |     | number  | first root filesystem |
single | 2 |     | number  | after installation    | important=yes
single | 3 |     | number  | rollback backup of #1 | important=yes
single | 4 |     |         |                       |

7.3.2 Accessing and Identifying Snapshot Boot Entries

To boot from a snapshot, reboot your machine and choose Start Bootloader from a read-only snapshot. A screen listing all bootable snapshots opens. The most recent snapshot is listed first, the oldest last. Use the ↑ and ↓ keys to navigate and press Enter to activate the selected snapshot. Activating a snapshot from the boot menu does not reboot the machine immediately, but rather opens the boot loader of the selected snapshot.

Figure 7.1: Boot Loader: Snapshots

Each snapshot entry in the boot loader follows a naming scheme which makes it possible to identify it easily:

[*]1OS2 (KERNEL3,DATE4TTIME5,DESCRIPTION6)

1

If the snapshot was marked important, the entry is marked with a *.

2

Operating system label.

3

Kernel version.

4

Date in the format YYYY-MM-DD.

5

Time in the format HH:MM.

6

This field contains a description of the snapshot. For a manually created snapshot, this is the string specified with the option --description or a custom string (see Tip: Setting a Custom Description for Boot Loader Snapshot Entries). For an automatically created snapshot, it is the tool that was called, for example zypp(zypper) or yast_sw_single. Long descriptions may be truncated, depending on the size of the boot screen.

Tip
Tip: Setting a Custom Description for Boot Loader Snapshot Entries

It is possible to replace the default string in the description field of a snapshot with a custom string. This is useful, for example, if an automatically created description is not sufficient, or a user-provided description is too long. To set a custom string STRING for snapshot NUMBER, use the following command:

tux > sudo snapper modify --userdata "bootloader=STRING" NUMBER

The description should be no longer than 25 characters—everything that exceeds this size will not be readable on the boot screen.
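For example, to label the hypothetical snapshot number 17 with the string config rollback, you could run:

tux > sudo snapper modify --userdata "bootloader=config rollback" 17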

7.3.3 Limitations

A complete system rollback, restoring the complete system to the identical state it was in when a snapshot was taken, is not possible.

7.3.3.1 Directories Excluded from Snapshots

Root file system snapshots do not contain all directories. See Section 7.1.2, “Directories That Are Excluded from Snapshots” for details and reasons. As a general consequence, data from these directories is not restored, resulting in the following limitations.

Add-ons and Third Party Software may be Unusable after a Rollback

Applications and add-ons that install data in subvolumes excluded from the snapshot, such as /opt, may not work after a rollback if other parts of the application data are installed on subvolumes included in the snapshot. Re-install the application or the add-on to solve this problem.

File Access Problems

If an application changed file permissions or ownership between the snapshot and the current system, the application may not be able to access these files. Reset permissions or ownership for the affected files after the rollback.

Incompatible Data Formats

If a service or an application established a new data format between the snapshot and the current system, the application may not be able to read the affected data files after a rollback.

Subvolumes with a Mixture of Code and Data

Subvolumes like /srv may contain a mixture of code and data. A rollback may result in non-functional code. A downgrade of the PHP version, for example, may result in broken PHP scripts for the Web server.

User Data

If a rollback removes users from the system, data owned by these users in directories excluded from the snapshot is not removed. If a user with the same user ID is created, this user will inherit the files. Use a tool like find to locate and remove orphaned files.

7.3.3.2 No Rollback of Boot Loader Data

A rollback of the boot loader is not possible, since all stages of the boot loader must fit together. This cannot be guaranteed when doing rollbacks of /boot.

7.4 Creating and Modifying Snapper Configurations

The way Snapper behaves is defined in a configuration file that is specific for each partition or Btrfs subvolume. These configuration files reside under /etc/snapper/configs/.

If the root file system is big enough (approximately 12 GB), snapshots are automatically enabled for the root file system / upon installation. The corresponding default configuration is named root. It creates and manages the YaST and Zypper snapshots. See Section 7.4.1.1, “Configuration Data” for a list of the default values.

Note
Note: Minimum Root File System Size for Enabling Snapshots

As explained in Section 7.1, “Default Setup”, enabling snapshots requires additional free space in the root file system. The amount depends on the number of packages installed and the amount of changes made to the volume included in snapshots. The snapshot frequency and the number of snapshots that get archived also matter.

There is a minimum root file system size that is required in order to automatically enable snapshots during the installation. As of SUSE Linux Enterprise Desktop 12 SP3 this size is approximately 12 GB. This value may change in the future, depending on architecture and the size of the base system. It depends on the values for the following tags in the file /control.xml from the installation media:

<root_base_size>
<btrfs_increase_percentage>

It is calculated with the following formula: ROOT_BASE_SIZE * (1 + BTRFS_INCREASE_PERCENTAGE/100)
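As an illustration only: assuming a hypothetical <root_base_size> of 3 GB and a <btrfs_increase_percentage> of 300, the minimum size would be 3 GB * (1 + 300/100) = 12 GB.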

Keep in mind that this value is a minimum size. Consider using more space for the root file system. As a rule of thumb, double the size you would use if snapshots were not enabled.

You may create your own configurations for other partitions formatted with Btrfs or existing subvolumes on a Btrfs partition. In the following example we will set up a Snapper configuration for backing up the Web server data residing on a separate, Btrfs-formatted partition mounted at /srv/www.

After a configuration has been created, you can either use snapper itself or the YaST Snapper module to restore files from these snapshots. In YaST you need to select your Current Configuration, while you need to specify your configuration for snapper with the global switch -c (for example, snapper -c myconfig list).

To create a new Snapper configuration, run snapper create-config:

tux > sudo snapper -c www-data1 create-config /srv/www2

1

Name of configuration file.

2

Mount point of the partition or Btrfs subvolume on which to take snapshots.

This command will create a new configuration file /etc/snapper/configs/www-data with reasonable default values (taken from /etc/snapper/config-templates/default). Refer to Section 7.4.1, “Managing Existing Configurations” for instructions on how to adjust these defaults.

Tip
Tip: Configuration Defaults

Default values for a new configuration are taken from /etc/snapper/config-templates/default. To use your own set of defaults, create a copy of this file in the same directory and adjust it to your needs. To use it, specify the -t option with the create-config command:

tux > sudo snapper -c www-data create-config -t MY_DEFAULTS /srv/www
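For example, to create a custom template named MY_DEFAULTS (the name used in the command above), copy the default template and adjust the copy:

tux > sudo cp /etc/snapper/config-templates/default /etc/snapper/config-templates/MY_DEFAULTS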

7.4.1 Managing Existing Configurations

snapper offers several subcommands for managing existing configurations. You can list, show, delete and modify them:

List Configurations

Use the command snapper list-configs to get all existing configurations:

tux > sudo snapper list-configs
Config | Subvolume
-------+----------
root   | /
usr    | /usr
local  | /local
Show a Configuration

Use the subcommand snapper -c CONFIG get-config to display the specified configuration. CONFIG needs to be replaced by a configuration name shown by snapper list-configs. See Section 7.4.1.1, “Configuration Data” for more information on the configuration options.

To display the default configuration run

tux > sudo snapper -c root get-config
Modify a Configuration

Use the subcommand snapper -c CONFIG set-config OPTION=VALUE to modify an option in the specified configuration. CONFIG needs to be replaced by a configuration name shown by snapper list-configs. Possible values for OPTION and VALUE are listed in Section 7.4.1.1, “Configuration Data”.

Delete a Configuration

Use the subcommand snapper -c CONFIG delete-config to delete a configuration. CONFIG needs to be replaced by a configuration name shown by snapper list-configs.

7.4.1.1 Configuration Data

Each configuration contains a list of options that can be modified from the command line. The following list provides details for each option. To change a value, run snapper -c CONFIG set-config "KEY=VALUE".

ALLOW_GROUPS, ALLOW_USERS

Grants regular users permission to use snapshots. See Section 7.4.1.2, “Using Snapper as Regular User” for more information.

The default value is "".

BACKGROUND_COMPARISON

Defines whether pre and post snapshots should be compared in the background after creation.

The default value is "yes".

EMPTY_*

Defines the clean-up algorithm for snapshot pairs with identical pre and post snapshots. See Section 7.6.3, “Cleaning Up Snapshot Pairs That Do Not Differ” for details.

FSTYPE

File system type of the partition. Do not change.

The default value is "btrfs".

NUMBER_*

Defines the clean-up algorithm for installation and admin snapshots. See Section 7.6.1, “Cleaning Up Numbered Snapshots” for details.

QGROUP / SPACE_LIMIT

Adds quota support to the clean-up algorithms. See Section 7.6.5, “Adding Disk Quota Support” for details.

SUBVOLUME

Mount point of the partition or subvolume to snapshot. Do not change.

The default value is "/".

SYNC_ACL

If Snapper is used by regular users (see Section 7.4.1.2, “Using Snapper as Regular User”), the users must be able to access the .snapshot directories and to read files within them. If SYNC_ACL is set to yes, Snapper automatically makes them accessible using ACLs for users and groups from the ALLOW_USERS or ALLOW_GROUPS entries.

The default value is "no".

TIMELINE_CREATE

If set to yes, hourly snapshots are created. Valid values: yes, no.

The default value is "no".

TIMELINE_CLEANUP / TIMELINE_LIMIT_*

Defines the clean-up algorithm for timeline snapshots. See Section 7.6.2, “Cleaning Up Timeline Snapshots” for details.

7.4.1.2 Using Snapper as Regular User

By default Snapper can only be used by root. However, there are cases in which certain groups or users need to be able to create snapshots or undo changes by reverting to a snapshot:

  • Web site administrators who want to take snapshots of /srv/www

  • Users who want to take a snapshot of their home directory

For these purposes, Snapper configurations that grant permissions to users and/or groups can be created. The corresponding .snapshots directory needs to be readable and accessible by the specified users. The easiest way to achieve this is to set the SYNC_ACL option to yes.

Procedure 7.5: Enabling Regular Users to Use Snapper

Note that all steps in this procedure need to be run by root.

  1. If a Snapper configuration does not exist yet, create one for the partition or subvolume on which the user should be able to use Snapper. Refer to Section 7.4, “Creating and Modifying Snapper Configurations” for instructions. Example:

    tux > sudo snapper --config web_data create-config /srv/www
  2. The configuration file is created under /etc/snapper/configs/CONFIG, where CONFIG is the value you specified with -c/--config in the previous step (for example /etc/snapper/configs/web_data). Adjust it according to your needs; see Section 7.4.1, “Managing Existing Configurations” for details.

  3. Set values for ALLOW_USERS and/or ALLOW_GROUPS to grant permissions to users and/or groups, respectively. Separate multiple entries with spaces. To grant permissions to the user www_admin for example, run:

    tux > sudo snapper -c web_data set-config ALLOW_USERS="www_admin" SYNC_ACL="yes"
  4. The given Snapper configuration can now be used by the specified user(s) and/or group(s). You can test it with the list command, for example:

    www_admin:~ > snapper -c web_data list

7.5 Manually Creating and Managing Snapshots

Snapper is not restricted to creating and managing snapshots automatically by configuration; you can also create snapshot pairs (before and after) or single snapshots manually using either the command-line tool or the YaST module.

All Snapper operations are carried out for an existing configuration (see Section 7.4, “Creating and Modifying Snapper Configurations” for details). You can only take snapshots of partitions or volumes for which a configuration exists. By default the system configuration (root) is used. If you want to create or manage snapshots for your own configuration you need to explicitly choose it. Use the Current Configuration drop-down box in YaST or specify the -c option on the command line (snapper -c MYCONFIG COMMAND).

7.5.1 Snapshot Metadata

Each snapshot consists of the snapshot itself and some metadata. When creating a snapshot you also need to specify the metadata. Modifying a snapshot means changing its metadata—you cannot modify its content. Use snapper list to show existing snapshots and their metadata:

snapper --config home list

Lists snapshots for the configuration home. To list snapshots for the default configuration (root), use snapper -c root list or snapper list.

snapper list -a

Lists snapshots for all existing configurations.

snapper list -t pre-post

Lists all pre and post snapshot pairs for the default (root) configuration.

snapper list -t single

Lists all snapshots of the type single for the default (root) configuration.

The following metadata is available for each snapshot:

  • Type: Snapshot type, see Section 7.5.1.1, “Snapshot Types” for details. This data cannot be changed.

  • Number: Unique number of the snapshot. This data cannot be changed.

  • Pre Number: Specifies the number of the corresponding pre snapshot. For snapshots of type post only. This data cannot be changed.

  • Description: A description of the snapshot.

  • Userdata: An extended description where you can specify custom data in the form of a comma-separated key=value list: reason=testing, project=foo. This field is also used to mark a snapshot as important (important=yes) and to list the user that created the snapshot (user=tux).

  • Cleanup-Algorithm: Cleanup-algorithm for the snapshot, see Section 7.6, “Automatic Snapshot Clean-Up” for details.

7.5.1.1 Snapshot Types

Snapper knows three different types of snapshots: pre, post, and single. Physically they do not differ, but Snapper handles them differently.

pre

Snapshot of a file system before a modification. Each pre snapshot has a corresponding post snapshot. Used for the automatic YaST/Zypper snapshots, for example.

post

Snapshot of a file system after a modification. Each post snapshot has a corresponding pre snapshot. Used for the automatic YaST/Zypper snapshots, for example.

single

Stand-alone snapshot. Used for the automatic hourly snapshots, for example. This is the default type when creating snapshots.

7.5.1.2 Cleanup-algorithms

Snapper provides three algorithms to clean up old snapshots. The algorithms are executed in a daily cron job. It is possible to define the number of different types of snapshots to keep in the Snapper configuration (see Section 7.4.1, “Managing Existing Configurations” for details).

number

Deletes old snapshots when a certain snapshot count is reached.

timeline

Deletes old snapshots having passed a certain age, but keeps several hourly, daily, monthly, and yearly snapshots.

empty-pre-post

Deletes pre/post snapshot pairs with empty diffs.

7.5.2 Creating Snapshots

Creating a snapshot is done by running snapper create or by clicking Create in the YaST module Snapper. The following examples explain how to create snapshots from the command line. It should be easy to adapt them when using the YaST interface.

Tip
Tip: Snapshot Description

Always specify a meaningful description so that you can identify the snapshot's purpose later. Even more information can be specified via the user data option.

snapper create --description "Snapshot for week 2 2014"

Creates a stand-alone snapshot (type single) for the default (root) configuration with a description. Because no cleanup-algorithm is specified, the snapshot will never be deleted automatically.

snapper --config home create --description "Cleanup in ~tux"

Creates a stand-alone snapshot (type single) for a custom configuration named home with a description. Because no cleanup-algorithm is specified, the snapshot will never be deleted automatically.

snapper --config home create --description "Daily data backup" --cleanup-algorithm timeline

Creates a stand-alone snapshot (type single) for a custom configuration named home with a description. The snapshot will automatically be deleted when it meets the criteria specified for the timeline cleanup-algorithm in the configuration.

snapper create --type pre --print-number --description "Before the Apache config cleanup" --userdata "important=yes"

Creates a snapshot of the type pre and prints the snapshot number. First command needed to create a pair of snapshots used to save a before and after state. The snapshot is marked as important.

snapper create --type post --pre-number 30 --description "After the Apache config cleanup" --userdata "important=yes"

Creates a snapshot of the type post paired with the pre snapshot number 30. Second command needed to create a pair of snapshots used to save a before and after state. The snapshot is marked as important.

snapper create --command COMMAND --description "Before and after COMMAND"

Automatically creates a snapshot pair before and after running COMMAND. This option is only available when using snapper on the command line.
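As a usage sketch for the last variant, the following wraps an illustrative command in a snapshot pair:

tux > sudo snapper create --command "zypper -n patch" --description "Before and after patching"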

7.5.3 Modifying Snapshot Metadata

Snapper allows you to modify the description, the cleanup algorithm, and the user data of a snapshot. All other metadata cannot be changed. The following examples explain how to modify snapshots from the command line. It should be easy to adapt them when using the YaST interface.

To modify a snapshot on the command line, you need to know its number. Use snapper list to display all snapshots and their numbers.

The YaST Snapper module already lists all snapshots. Choose one from the list and click Modify.

snapper modify --cleanup-algorithm "timeline" 10

Modifies the metadata of snapshot 10 for the default (root) configuration. The cleanup algorithm is set to timeline.

snapper --config home modify --description "daily backup" --cleanup-algorithm "" 120

Modifies the metadata of snapshot 120 for a custom configuration named home. A new description is set and the cleanup algorithm is unset.

7.5.4 Deleting Snapshots

To delete a snapshot with the YaST Snapper module, choose a snapshot from the list and click Delete.

To delete a snapshot with the command line tool, you need to know its number. Get it by running snapper list. To delete a snapshot, run snapper delete NUMBER.

Deleting the current default subvolume snapshot is not allowed.

When deleting snapshots with Snapper, the freed space will be claimed by a Btrfs process running in the background. Thus the visibility and the availability of free space are delayed. In case you need the space freed by deleting a snapshot to be available immediately, use the option --sync with the delete command.

Tip
Tip: Deleting Snapshot Pairs

When deleting a pre snapshot, you should always delete its corresponding post snapshot (and vice versa).

snapper delete 65

Deletes snapshot 65 for the default (root) configuration.

snapper -c home delete 89 90

Deletes snapshots 89 and 90 for a custom configuration named home.

snapper delete --sync 23

Deletes snapshot 23 for the default (root) configuration and makes the freed space available immediately.

Tip
Tip: Delete Unreferenced Snapshots

Sometimes the Btrfs snapshot is present but the XML file containing the metadata for Snapper is missing. In this case the snapshot is not visible for Snapper and needs to be deleted manually:

btrfs subvolume delete /.snapshots/SNAPSHOTNUMBER/snapshot
rm -rf /.snapshots/SNAPSHOTNUMBER
Tip
Tip: Old Snapshots Occupy More Disk Space

If you delete snapshots to free space on your hard disk, make sure to delete old snapshots first. The older a snapshot is, the more disk space it occupies.

Snapshots are also automatically deleted by a daily cron job. Refer to Section 7.5.1.2, “Cleanup-algorithms” for details.

7.6 Automatic Snapshot Clean-Up

Snapshots occupy disk space and over time the amount of disk space occupied by the snapshots may become large. To prevent disks from running out of space, Snapper offers algorithms to automatically delete old snapshots. These algorithms differentiate between timeline snapshots and numbered snapshots (administration plus installation snapshot pairs). You can specify the number of snapshots to keep for each type.

In addition to that, you can optionally specify a disk space quota, defining the maximum amount of disk space the snapshots may occupy. It is also possible to automatically delete pre and post snapshot pairs that do not differ.

A clean-up algorithm is always bound to a single Snapper configuration, so you need to configure algorithms for each configuration. To prevent certain snapshots from being automatically deleted, refer to How to make a snapshot permanent? in Section 7.7, “Frequently Asked Questions”.

The default setup (root) is configured to do clean-up for numbered snapshots and empty pre and post snapshot pairs. Quota support is enabled—snapshots may not occupy more than 50% of the available disk space of the root partition. Timeline snapshots are disabled by default, therefore the timeline clean-up algorithm is also disabled.

7.6.1 Cleaning Up Numbered Snapshots

Cleaning up numbered snapshots—administration plus installation snapshot pairs—is controlled by the following parameters of a Snapper configuration.

NUMBER_CLEANUP

Enables or disables clean-up of installation and admin snapshot pairs. If enabled, snapshot pairs are deleted when the total snapshot count exceeds a number specified with NUMBER_LIMIT and/or NUMBER_LIMIT_IMPORTANT and an age specified with NUMBER_MIN_AGE. Valid values: yes (enable), no (disable).

The default value is "yes".

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "NUMBER_CLEANUP=no"
NUMBER_LIMIT / NUMBER_LIMIT_IMPORTANT

Defines how many regular and/or important installation and administration snapshot pairs to keep. Only the youngest snapshots will be kept. Ignored if NUMBER_CLEANUP is set to "no".

The default value is "2-10" for NUMBER_LIMIT and "4-10" for NUMBER_LIMIT_IMPORTANT.

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "NUMBER_LIMIT=10"
Important
Important: Ranged Compared to Constant Values

In case quota support is enabled (see Section 7.6.5, “Adding Disk Quota Support”) the limit needs to be specified as a minimum-maximum range, for example 2-10. If quota support is disabled, a constant value, for example 10, needs to be provided, otherwise cleaning-up will fail with an error.

NUMBER_MIN_AGE

Defines the minimum age in seconds a snapshot must have before it can automatically be deleted. Snapshots younger than the value specified here will not be deleted, regardless of how many exist.

The default value is "1800".

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "NUMBER_MIN_AGE=864000"
Note
Note: Limit and Age

NUMBER_LIMIT, NUMBER_LIMIT_IMPORTANT and NUMBER_MIN_AGE are always evaluated. Snapshots are only deleted when all conditions are met.

If you always want to keep the number of snapshots defined with NUMBER_LIMIT* regardless of their age, set NUMBER_MIN_AGE to 0.

The following example shows a configuration to keep the last 10 important and regular snapshots regardless of age:

NUMBER_CLEANUP=yes
NUMBER_LIMIT_IMPORTANT=10
NUMBER_LIMIT=10
NUMBER_MIN_AGE=0

On the other hand, if you do not want to keep snapshots beyond a certain age, set NUMBER_LIMIT* to 0 and provide the age with NUMBER_MIN_AGE.

The following example shows a configuration to only keep snapshots younger than ten days:

NUMBER_CLEANUP=yes
NUMBER_LIMIT_IMPORTANT=0
NUMBER_LIMIT=0
NUMBER_MIN_AGE=864000

7.6.2 Cleaning Up Timeline Snapshots

Cleaning up timeline snapshots is controlled by the following parameters of a Snapper configuration.

TIMELINE_CLEANUP

Enables or disables clean-up of timeline snapshots. If enabled, snapshots are deleted when the total snapshot count exceeds a number specified with TIMELINE_LIMIT_* and an age specified with TIMELINE_MIN_AGE. Valid values: yes, no.

The default value is "yes".

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "TIMELINE_CLEANUP=yes"
TIMELINE_LIMIT_DAILY, TIMELINE_LIMIT_HOURLY, TIMELINE_LIMIT_MONTHLY, TIMELINE_LIMIT_WEEKLY, TIMELINE_LIMIT_YEARLY

Number of snapshots to keep per hour, day, month, week, and year.

The default value for each entry is "10", except for TIMELINE_LIMIT_WEEKLY, which is set to "0" by default.

TIMELINE_MIN_AGE

Defines the minimum age in seconds a snapshot must have before it can automatically be deleted.

The default value is "1800".

Example 7.1: Example timeline configuration
TIMELINE_CLEANUP="yes"
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_YEARLY="2"
TIMELINE_MIN_AGE="1800"

This example configuration enables hourly snapshots, which are automatically cleaned up. TIMELINE_MIN_AGE and TIMELINE_LIMIT_* are always both evaluated. In this example, the minimum age of a snapshot before it can be deleted is set to 30 minutes (1800 seconds). Since we create hourly snapshots, this ensures that only the latest snapshots are kept. If TIMELINE_LIMIT_DAILY is set to a non-zero value, the first snapshot of the day is kept, too.

Snapshots to be Kept
  • Hourly: The last 24 snapshots that have been made.

  • Daily: The first daily snapshot that has been made is kept from the last seven days.

  • Monthly: The first snapshot made on the last day of the month is kept for the last twelve months.

  • Weekly: The first snapshot made on the last day of the week is kept from the last four weeks.

  • Yearly: The first snapshot made on the last day of the year is kept for the last two years.

7.6.3 Cleaning Up Snapshot Pairs That Do Not Differ

As explained in Section 7.1.1, “Types of Snapshots”, whenever you run a YaST module or execute Zypper, a pre snapshot is created on start-up and a post snapshot is created when exiting. If you have not made any changes, there will be no difference between the pre and post snapshots. Such empty snapshot pairs can be deleted automatically by setting the following parameters in a Snapper configuration:

EMPTY_PRE_POST_CLEANUP

If set to yes, pre and post snapshot pairs that do not differ will be deleted.

The default value is "yes".

EMPTY_PRE_POST_MIN_AGE

Defines the minimum age in seconds a pre and post snapshot pair that does not differ must have before it can automatically be deleted.

The default value is "1800".
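For example, to make sure empty pairs are kept for at least one hour before being cleaned up, you could run the following command (the value is illustrative):

tux > sudo snapper -c root set-config "EMPTY_PRE_POST_CLEANUP=yes" "EMPTY_PRE_POST_MIN_AGE=3600"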

7.6.4 Cleaning Up Manually Created Snapshots

Snapper does not offer custom clean-up algorithms for manually created snapshots. However, you can assign the number or timeline clean-up algorithm to a manually created snapshot. If you do so, the snapshot will join the clean-up queue for the algorithm you specified. You can specify a clean-up algorithm when creating a snapshot, or by modifying an existing snapshot:

snapper create --description "Test" --cleanup-algorithm number

Creates a stand-alone snapshot (type single) for the default (root) configuration and assigns the number clean-up algorithm.

snapper modify --cleanup-algorithm "timeline" 25

Modifies the snapshot with the number 25 and assigns the clean-up algorithm timeline.

7.6.5 Adding Disk Quota Support

In addition to the number and/or timeline clean-up algorithms described above, Snapper supports quotas. You can define what percentage of the available space snapshots are allowed to occupy. This percentage value always applies to the Btrfs subvolume defined in the respective Snapper configuration.

If Snapper was enabled during the installation, quota support is automatically enabled. In case you manually enable Snapper at a later point in time, you can enable quota support by running snapper setup-quota. This requires a valid configuration (see Section 7.4, “Creating and Modifying Snapper Configurations” for more information).

Quota support is controlled by the following parameters of a Snapper configuration.

QGROUP

The Btrfs quota group used by Snapper. If not set, run snapper setup-quota, which sets this value. Do not change it manually unless you are familiar with man 8 btrfs-qgroup.

SPACE_LIMIT

Limit of space snapshots are allowed to use in fractions of 1 (100%). Valid values range from 0 to 1 (0.1 = 10%, 0.2 = 20%, ...).

The following limitations and guidelines apply:

  • Quotas are only activated in addition to an existing number and/or timeline clean-up algorithm. If no clean-up algorithm is active, quota restrictions are not applied.

  • With quota support enabled, Snapper will perform two clean-up runs if required. The first run applies the rules specified for number and timeline snapshots. Only if the quota is still exceeded after this run will the quota-specific rules be applied in a second run.

  • Even if quota support is enabled, Snapper will always keep the number of snapshots specified with the NUMBER_LIMIT* and TIMELINE_LIMIT* values, even if the quota is exceeded. It is therefore recommended to specify ranged values (MIN-MAX) for NUMBER_LIMIT* and TIMELINE_LIMIT* to ensure the quota can be applied (see the example snippet after this list).

    If, for example, NUMBER_LIMIT=5-20 is set, Snapper will perform a first clean-up run and reduce the number of regular numbered snapshots to 20. In case these 20 snapshots exceed the quota, Snapper will delete the oldest ones in a second run until the quota is met. A minimum of five snapshots will always be kept, regardless of the amount of space they occupy.
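
A corresponding configuration snippet could look like the following sketch (the values are examples; adjust them to your needs):

NUMBER_CLEANUP="yes"
NUMBER_LIMIT="5-20"
SPACE_LIMIT="0.5"

With these settings, the first clean-up run reduces the numbered snapshots to at most 20; if they still occupy more than 50% of the available space, the second run deletes the oldest ones down to a minimum of five.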

7.7 Frequently Asked Questions

Why does Snapper Never Show Changes in /var/log, /tmp and Other Directories?

Some directories are deliberately excluded from snapshots. See Section 7.1.2, “Directories That Are Excluded from Snapshots” for a list and the reasons. To exclude a path from snapshots, a subvolume is created for that path.

How much disk space is used by snapshots? How to free disk space?

Displaying the amount of disk space a snapshot allocates is currently not supported by the Btrfs tools. However, if you have quota enabled, it is possible to determine how much space would be freed if all snapshots were deleted:

  1. Get the quota group ID (1/0 in the following example):

    tux > sudo snapper -c root get-config | grep QGROUP
    QGROUP                 | 1/0
  2. Rescan the subvolume quotas:

    tux > sudo btrfs quota rescan -w /
  3. Show the data of the quota group (1/0 in the following example):

    tux > sudo btrfs qgroup show / | grep "1/0"
    1/0           4.80GiB    108.82MiB

    The third column shows the amount of space that would be freed when deleting all snapshots (108.82MiB).

To free space on a Btrfs partition containing snapshots, you need to delete unneeded snapshots rather than files. Older snapshots occupy more space than recent ones. See Section 7.1.3.4, “Controlling Snapshot Archiving” for details.

An upgrade from one service pack to another results in snapshots occupying a lot of disk space on the system subvolumes, because a lot of data gets changed (package updates). Manually deleting these snapshots after they are no longer needed is recommended. See Section 7.5.4, “Deleting Snapshots” for details.
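
For example, to delete the snapshots numbered 65 through 70 (the numbers are hypothetical; determine yours with snapper list):

tux > sudo snapper delete 65-70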

Can I Boot a Snapshot from the Boot Loader?

Yes—refer to Section 7.3, “System Rollback by Booting from Snapshots” for details.

How to make a snapshot permanent?

Currently Snapper does not offer means to prevent a snapshot from being deleted manually. However, you can prevent snapshots from being deleted automatically by clean-up algorithms. Manually created snapshots (see Section 7.5.2, “Creating Snapshots”) have no clean-up algorithm assigned unless you specify one with --cleanup-algorithm. Automatically created snapshots always have either the number or the timeline algorithm assigned. To remove such an assignment from one or more snapshots, proceed as follows:

  1. List all available snapshots:

    tux > sudo snapper list -a
  2. Memorize the number of the snapshot(s) you want to prevent from being deleted.

  3. Run the following command and replace the number placeholders with the number(s) you memorized:

    tux > sudo snapper modify --cleanup-algorithm "" #1 #2 #n
  4. Check the result by running snapper list -a again. The entry in the column Cleanup should now be empty for the snapshots you modified.

Where can I get more information on Snapper?

See the Snapper home page at http://snapper.io/.

8 Remote Access with VNC

Abstract

Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.

SUSE Linux Enterprise Desktop supports two different kinds of VNC sessions: One-time sessions that live as long as the VNC connection from the client is kept up, and persistent sessions that live until they are explicitly terminated.

Note
Note: Session Types

A machine can offer both kinds of sessions simultaneously on different ports, but an open session cannot be converted from one type to the other.

8.1 The vncviewer Client

To connect to a VNC service provided by a server, a client is needed. The default in SUSE Linux Enterprise Desktop is vncviewer, provided by the tigervnc package.

8.1.1 Connecting Using the vncviewer CLI

To start your VNC viewer and initiate a session with the server, use the command:

tux > vncviewer jupiter.example.com:1

Instead of the VNC display number you can also specify the port number with two colons:

tux > vncviewer jupiter.example.com::5901
Note
Note: Display and Port Number

The actual display or port number you specify in the VNC client must be the same as the display or port number picked by the vncserver command on the target machine. See Section 8.4, “Persistent VNC Sessions” for further information.

8.1.2 Connecting Using the vncviewer GUI

When you run vncviewer without specifying --listen or a host to connect to, it shows a window asking for connection details. Enter the host into the VNC server field as in Section 8.1.1, “Connecting Using the vncviewer CLI” and click Connect.

vncviewer asking for connection details
Figure 8.1: vncviewer

8.1.3 Notification of Unencrypted Connections

The VNC protocol supports different kinds of encrypted connections, not to be confused with password authentication. If a connection does not use TLS, the text (Connection not encrypted!) can be seen in the window title of the VNC viewer.

8.2 Remmina: the Remote Desktop Client

Remmina is a modern and feature-rich remote desktop client. It supports several access methods, for example VNC, SSH, RDP, and Spice.

8.2.1 Installation

To use Remmina, verify whether the remmina package is installed on your system, and install it if not. Remember to install the VNC plug-in for Remmina as well:

root # zypper in remmina remmina-plugin-vnc

8.2.2 Main Window

Run Remmina by entering the remmina command.

Remmina's Main Window
Figure 8.2: Remmina's Main Window

The main application window shows the list of stored remote sessions. Here you can add and save a new remote session, quick-start a new session without saving it, start a previously saved session, or set Remmina's global preferences.

8.2.3 Adding Remote Sessions

To add and save a new remote session, click Add new session in the top left of the main window. The Remote Desktop Preference window opens.

Remote Desktop Preference
Figure 8.3: Remote Desktop Preference

Complete the fields that specify your newly added remote session profile. The most important are:

Name

Name of the profile. It will be listed in the main window.

Protocol

The protocol to use when connecting to the remote session, for example VNC.

Server

The IP or DNS address and display number of the remote server.

User name, Password

Credentials to use for remote authentication. Leave empty for no authentication.

Color depth, Quality

Select the best options according to your connection speed and quality.

Select the Advanced tab to enter more specific settings.

Tip
Tip: Disable Encryption

If the communication between the client and the remote server is not encrypted, activate Disable encryption; otherwise, the connection fails.

Select the SSH tab for advanced SSH tunneling and authentication options.

Confirm with Save. Your new profile will be listed in the main window.

8.2.4 Starting Remote Sessions

You can either start a previously saved session, or quick-start a remote session without saving the connection details.

8.2.4.1 Quick-starting Remote Sessions

To start a remote session quickly without adding and saving connection details, use the drop-down box and text field at the top of the main window.

Quick-starting
Figure 8.4: Quick-starting

Select the communication protocol from the drop-down box, for example 'VNC', then enter the VNC server DNS or IP address followed by a colon and a display number, and confirm with Enter.

8.2.4.2 Opening Saved Remote Sessions

To open a specific remote session, double-click it from the list of sessions.

8.2.4.3 Remote Sessions Window

Remote sessions are opened in tabs of a separate window. Each tab hosts one session. The toolbar on the left of the window helps you manage the windows/sessions. For example, you can toggle full-screen mode, resize the window to match the display size of the session, send specific keystrokes to the session, take screenshots of the session, or set the image quality.

Remmina Viewing SLES 15 Remote Session
Figure 8.5: Remmina Viewing SLES 15 Remote Session

8.2.5 Editing, Copying, and Deleting Saved Sessions

To edit a saved remote session, right-click its name in Remmina's main window and select Edit. Refer to Section 8.2.3, “Adding Remote Sessions” for the description of the relevant fields.

To copy a saved remote session, right-click its name in Remmina's main window and select Copy. In the Remote Desktop Preference window, change the name of the profile, optionally adjust relevant options, and confirm with Save.

To delete a saved remote session, right-click its name in Remmina's main window and select Delete. Confirm with Yes in the next dialog.

8.2.6 Running Remote Sessions from the Command Line

If you need to open a remote session from the command line or from a batch file without first opening the main application window, use the following syntax:

 tux > remmina -c profile_name.remmina

Remmina's profile files are stored in the .local/share/remmina/ directory in your home directory. To determine which profile file belongs to the session you want to open, run Remmina, click the session name in the main window, and read the path to the profile file in the window's status line at the bottom.

Reading Path to the Profile File
Figure 8.6: Reading Path to the Profile File

While Remmina is not running, you can rename the profile file to a more reasonable file name, such as sle15.remmina. You can even copy the profile file to a custom directory and run it from there using the remmina -c command.
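
For example (a sketch with hypothetical paths and file names):

tux > mkdir -p ~/vnc-sessions
tux > cp ~/.local/share/remmina/sle15.remmina ~/vnc-sessions/
tux > remmina -c ~/vnc-sessions/sle15.remmina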

8.3 One-time VNC Sessions

A one-time session is initiated by the remote client. It starts a graphical login screen on the server. This way you can choose the user who starts the session and, if supported by the login manager, the desktop environment. When you terminate the client connection to such a VNC session, all applications started within that session will be terminated, too. One-time VNC sessions cannot be shared, but it is possible to have multiple sessions on a single host at the same time.

Procedure 8.1: Enabling One-time VNC Sessions
  1. Start YaST › Network Services › Remote Administration (VNC).

  2. Check Allow Remote Administration Without Session Management.

  3. Activate Enable access using a web browser if you plan to access the VNC session in a Web browser window.

  4. If necessary, also check Open Port in Firewall (for example, when your network interface is configured to be in the External Zone). If you have more than one network interface, restrict opening the firewall ports to a specific interface via Firewall Details.

  5. Confirm your settings with Next.

  6. If not all needed packages are available yet, you need to approve the installation of the missing packages.

    Tip
    Tip: Restart the Display Manager

    YaST makes changes to the display manager settings. You need to log out of your current graphical session and restart the display manager for the changes to take effect.

Remote Administration
Figure 8.7: Remote Administration

8.3.1 Available Configurations

The default configuration on SUSE Linux Enterprise Desktop serves sessions with a resolution of 1024x768 pixels at a color depth of 16-bit. The sessions are available on ports 5901 for regular VNC viewers (equivalent to VNC display 1) and on port 5801 for Web browsers.

Other configurations can be made available on different ports. Ask your system administrator for details if you need to modify the configuration.

VNC display numbers and X display numbers are independent in one-time sessions. A VNC display number is manually assigned to every configuration that the server supports (:1 in the example above). Whenever a VNC session is initiated with one of the configurations, it automatically gets a free X display number.

By default, both the VNC client and server try to communicate securely via a self-signed SSL certificate, which is generated after installation. You can either use the default one, or replace it with your own. When using the self-signed certificate, you need to confirm its signature before the first connection.

8.3.2 Initiating a One-time VNC Session

To connect to a one-time VNC session, a VNC viewer must be installed, see also Section 8.1, “The vncviewer Client”.

8.3.3 Configuring One-time VNC Sessions

You can skip this section if you do not need or want to modify the default configuration.

One-time VNC sessions are started via the systemd socket xvnc.socket. By default it offers six configuration blocks: three for VNC viewers (vnc1 to vnc3), and three serving a Java applet (vnchttpd1 to vnchttpd3). By default only vnc1 and vnchttpd1 are active.

To activate the VNC server socket at boot time, run the following command:

tux > sudo systemctl enable xvnc.socket

To start the socket immediately, run:

tux > sudo systemctl start xvnc.socket
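
To verify that the socket is active and listening, you can additionally check its status (a quick sanity check, not part of the original setup steps):

tux > sudo systemctl status xvnc.socket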

The Xvnc server can be configured via the server_args option. For a list of options, see Xvnc --help.

When adding custom configurations, make sure they are not using ports that are already in use by other configurations, other services, or existing persistent VNC sessions on the same host.

Activate configuration changes by entering the following command:

tux > sudo systemctl reload xvnc.socket
Important
Important: Firewall and VNC Ports

When activating Remote Administration as described in Procedure 8.1, “Enabling One-time VNC Sessions”, the ports 5801 and 5901 are opened in the firewall. If the network interface serving the VNC sessions is protected by a firewall, you need to manually open the respective ports when activating additional ports for VNC sessions. See Chapter 15, Masquerading and Firewalls for instructions.

8.4 Persistent VNC Sessions

A persistent session can be accessed from multiple clients simultaneously. This is ideal for demonstration purposes where one client has full access and all other clients have view-only access. Another use case is a training session where the trainer needs access to the trainee's desktop.

Tip
Tip: Connecting to a Persistent VNC Session

To connect to a persistent VNC session, a VNC viewer must be installed. Refer to Section 8.1, “The vncviewer Client” for more details.

There are two types of persistent VNC sessions:

8.4.1 VNC Session Initiated using vncserver

This type of persistent VNC session is initiated on the server. The session and all applications started in this session run regardless of client connections until the session is terminated. Access to persistent sessions is protected by two possible types of passwords:

  • a regular password that grants full access or

  • an optional view-only password that grants a non-interactive (view-only) access.

A session can have multiple client connections of both kinds at once.

Procedure 8.2: Starting a Persistent VNC Session using vncserver
  1. Open a shell and make sure you are logged in as the user that should own the VNC session.

  2. If the network interface serving the VNC sessions is protected by a firewall, you need to manually open the port used by your session in the firewall. If starting multiple sessions you may alternatively open a range of ports. See Chapter 15, Masquerading and Firewalls for details on how to configure the firewall.

    vncserver uses the ports 5901 for display :1, 5902 for display :2, and so on. For persistent sessions, the VNC display and the X display usually have the same number.

  3. To start a session with a resolution of 1024x768 pixels and with a color depth of 16-bit, enter the following command:

    tux > vncserver -alwaysshared -geometry 1024x768 -depth 16

    The vncserver command picks an unused display number when none is given and prints its choice. See man 1 vncserver for more options.

When running vncserver for the first time, it asks for a password for full access to the session. If needed, you can also provide a password for view-only access to the session.

The passwords you provide here are also used for future sessions started by the same user. They can be changed with the vncpasswd command.

Important
Important: Security Considerations

Make sure to use strong passwords of significant length (eight or more characters). Do not share these passwords.

To terminate the session, shut down the desktop environment that runs inside the VNC session from the VNC viewer, as you would shut down a regular local X session.

If you prefer to manually terminate a session, open a shell on the VNC server and make sure you are logged in as the user that owns the VNC session you want to terminate. Then run the following command to terminate the session that runs on display :1:

tux > vncserver -kill :1

8.4.1.1 Configuring Persistent VNC Sessions

Persistent VNC sessions can be configured by editing $HOME/.vnc/xstartup. By default this shell script starts the same GUI/window manager it was started from. In SUSE Linux Enterprise Desktop this will either be GNOME or IceWM. If you want to start your session with a window manager of your choice, set the variable WINDOWMANAGER:

WINDOWMANAGER=gnome vncserver -geometry 1024x768
WINDOWMANAGER=icewm vncserver -geometry 1024x768
Note
Note: One Configuration for Each User

Persistent VNC sessions are configured in a single per-user configuration. Multiple sessions started by the same user will all use the same start-up and password files.

8.4.2 VNC Session Initiated using vncmanager

Procedure 8.3: Enabling Persistent VNC Sessions
  1. Start YaST › Network Services › Remote Administration (VNC).

  2. Activate Allow Remote Administration With Session Management.

  3. Activate Enable access using a web browser if you plan to access the VNC session in a Web browser window.

  4. If necessary, also check Open Port in Firewall (for example, when your network interface is configured to be in the External Zone). If you have more than one network interface, restrict opening the firewall ports to a specific interface via Firewall Details.

  5. Confirm your settings with Next.

  6. If not all needed packages are available yet, you need to approve the installation of the missing packages.

    Tip
    Tip: Restart the Display Manager

    YaST makes changes to the display manager settings. You need to log out of your current graphical session and restart the display manager for the changes to take effect.

8.4.2.1 Configuring Persistent VNC Sessions

After you enable VNC session management as described in Procedure 8.3, “Enabling Persistent VNC Sessions”, you can connect to the remote session as usual with your favorite VNC viewer, such as vncviewer or Remmina. You will be presented with a login screen. After you log in, a 'VNC' icon will appear in the system tray of your desktop environment. Click the icon to open the VNC Session window. If it does not appear or if your desktop environment does not support icons in the system tray, run vncmanager-controller manually.

VNC Session Settings
Figure 8.8: VNC Session Settings

There are several settings which influence the VNC session behavior:

Non-persistent, private

This is equivalent to a one-time session. Such a session is not visible to others and will be terminated after you disconnect from it. Refer to Section 8.3, “One-time VNC Sessions” for more information.

Persistent, visible

The session is visible to other users and keeps running even after you disconnect from it.

Session name

Here you can specify the name of the persistent session so that it is easily identified when reconnecting.

No password required

The session will be freely accessible without having to log in under user credentials.

Require user login

You need to log in with a valid user name and password to access the session. List the valid user names in the Allowed users text box.

Allow one client at a time

Disables joining the session by multiple users at the same time.

Allow multiple clients at a time

Allows multiple users to join the persistent session at the same time. This is useful, for example, for remote presentations or training sessions.

Confirm with OK.

8.4.2.2 Joining Persistent VNC Sessions

After you set up a persistent VNC session as described in Section 8.4.2.1, “Configuring Persistent VNC Sessions”, you can join it with your VNC viewer. After your VNC client connects to the server, you will be prompted to choose whether you want to create a new session or join the existing one:

Joining a Persistent VNC Session
Figure 8.9: Joining a Persistent VNC Session

After you click the name of the existing session, you may be asked for login credentials, depending on the persistent session settings.

8.5 Encrypted VNC Communication

If the VNC server is set up properly, all communication between the VNC server and the client is encrypted. The authentication happens at the beginning of the session; the actual data transfer only begins afterward.

Whether for a one-time or a persistent VNC session, security options are configured via the -securitytypes parameter of the /usr/bin/Xvnc command located on the server_args line. The -securitytypes parameter selects both authentication method and encryption. It has the following options:

Authentications
None, TLSNone, X509None

No authentication.

VncAuth, TLSVnc, X509Vnc

Authentication using custom password.

Plain, TLSPlain, X509Plain

Authentication using PAM to verify the user's password.

Encryptions
None, VncAuth, Plain

No encryption.

TLSNone, TLSVnc, TLSPlain

Anonymous TLS encryption. Everything is encrypted, but there is no verification of the remote host. So you are protected against passive attackers, but not against man-in-the-middle attackers.

X509None, X509Vnc, X509Plain

TLS encryption with certificate. If you use a self-signed certificate, you will be asked to verify it on the first connection. On subsequent connections you will be warned only if the certificate changed. So you are protected against everything except man-in-the-middle on the first connection (similar to typical SSH usage). If you use a certificate signed by a certificate authority matching the machine name, then you get full security (similar to typical HTTPS usage).

Tip
Tip: Path to Certificate and Key

With X509-based encryption, you need to specify the path to the X509 certificate and the key with the -X509Cert and -X509Key options.

If you select multiple security types separated by commas, the first one supported and allowed by both client and server will be used. That way, you can configure opportunistic encryption on the server. This is useful if you need to support VNC clients that do not support encryption.
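
For example, a server_args value like the following (an illustrative setting, not a default) prefers certificate-based TLS, falls back to anonymous TLS, and finally allows an unencrypted connection with password authentication:

-securitytypes=X509Vnc,TLSVnc,VncAuth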

On the client, you can also specify the allowed security types to prevent a downgrade attack if you are connecting to a server which you know has encryption enabled (although our vncviewer will warn you with the "Connection not encrypted!" message in that case).

9 File Copying with RSync

Abstract

Today, a typical user has several computers: home and workplace machines, a laptop, a smartphone or a tablet. This makes the task of keeping files and documents in sync across multiple devices all the more important.

Warning
Warning: Risk of Data Loss

Before you start using a synchronization tool, you should familiarize yourself with its features and functionality. Make sure to back up your important files.

9.1 Conceptual Overview

For synchronizing a large amount of data over a slow network connection, Rsync offers a reliable method of transmitting only changes within files. This applies not only to text files but also to binary files. To detect the differences between files, Rsync subdivides the files into blocks and computes checksums over them.

Detecting changes requires some computing power, so make sure that the machines on both ends have enough resources, including sufficient RAM.

Rsync can be particularly useful when large amounts of data containing only minor changes need to be transmitted regularly. This is often the case when working with backups. Rsync can also be useful for mirroring staging servers that store complete directory trees of Web servers to a Web server in a DMZ.

Despite its name, Rsync is not a synchronization tool. Rsync copies data in one direction only, from a source to a destination, in a single run. If you need a bidirectional tool which is able to synchronize both source and destination, use Csync.

9.2 Basic Syntax

Rsync is a command-line tool that has the following basic syntax:

rsync [OPTION] SOURCE [SOURCE]... DEST

You can use Rsync on any local or remote machine, provided you have access and write permissions. It is possible to have multiple SOURCE entries. The SOURCE and DEST placeholders can be paths, URLs, or both.

Below are the most common Rsync options:

-v

Outputs more verbose text

-a

Archive mode; copies files recursively and preserves timestamps, user/group ownership, file permissions, and symbolic links

-z

Compresses the transmitted data

Note
Note: Trailing Slashes Count

When working with Rsync, you should pay particular attention to trailing slashes. A trailing slash after the directory denotes the content of the directory. No trailing slash denotes the directory itself.
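
The following sketch illustrates the difference, using the directory and target from the examples below:

tux > rsync -a tux /var/backup/    # copies the directory itself to /var/backup/tux/
tux > rsync -a tux/ /var/backup/   # copies only the content of tux/ into /var/backup/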

9.3 Copying Files and Directories Locally

The following description assumes that the current user has write permissions to the directory /var/backup. To copy a single file from one directory on your machine to another path, use the following command:

tux > rsync -avz backup.tar.xz /var/backup/

The file backup.tar.xz is copied to /var/backup/; the absolute path will be /var/backup/backup.tar.xz.

Do not forget to add the trailing slash after the /var/backup/ directory. If you omit the slash, the file backup.tar.xz is copied to /var/backup (as a file), not into the directory /var/backup/.

Copying a directory is similar to copying a single file. The following example copies the directory tux/ and its content into the directory /var/backup/:

tux > rsync -avz tux /var/backup/

Find the copy in the absolute path /var/backup/tux/.

9.4 Copying Files and Directories Remotely

The Rsync tool is required on both machines. Copying files from or to remote directories requires an IP address or a domain name. A user name is optional if the current user names on the local and the remote machine are the same.

To copy the file file.tar.xz from your local host to the remote host 192.168.1.1, assuming the same user name exists on both machines, use the following command:

tux > rsync -avz file.tar.xz  tux@192.168.1.1:

Depending on what you prefer, these commands are also possible and equivalent:

tux > rsync -avz file.tar.xz 192.168.1.1:~
tux > rsync -avz file.tar.xz 192.168.1.1:/home/tux

In all cases with standard configuration, you will be prompted to enter the password of the remote user. This command copies file.tar.xz to the home directory of user tux (usually /home/tux).

Copying a directory remotely is similar to copying a directory locally. The following example copies the directory tux/ and its content into the remote directory /var/backup/ on the 192.168.1.1 host:

tux > rsync -avz tux 192.168.1.1:/var/backup/

Assuming you have write permissions on the host 192.168.1.1, you will find the copy in the absolute path /var/backup/tux.

9.5 Configuring and Using an Rsync Server

Rsync can run as a daemon (rsyncd) listening on the default port 873 for incoming connections. The daemon provides copying targets.

The following description explains how to create an Rsync server on jupiter with a backup target. This target can be used to store your backups. To create an Rsync server, do the following:

Procedure 9.1: Setting Up an Rsync Server
  1. On jupiter, create a directory to store all your backup files. In this example, we use /var/backup:

    root # mkdir /var/backup
  2. Specify ownership. In this case, the directory is owned by user tux in group users:

    root # chown tux.users /var/backup
  3. Configure the rsyncd daemon.

    We will separate the configuration file into a main file and some modules which hold your backup target. This makes it easier to add additional targets later. Global values can be stored in /etc/rsyncd.d/*.inc files, whereas your modules are placed in /etc/rsyncd.d/*.conf files:

    1. Create a directory /etc/rsyncd.d/:

      root # mkdir /etc/rsyncd.d/
    2. In the main configuration file /etc/rsyncd.conf, add the following lines:

      # rsyncd.conf main configuration file
      log file = /var/log/rsync.log
      pid file = /var/lock/rsync.lock
      
      &merge /etc/rsyncd.d 1
      &include /etc/rsyncd.d 2

      1

      Merges global values from /etc/rsyncd.d/*.inc files into the main configuration file.

      2

      Loads any modules (or targets) from /etc/rsyncd.d/*.conf files. These files should not contain any references to global values.

    3. Create your module (your backup target) in the file /etc/rsyncd.d/backup.conf with the following lines:

      # backup.conf: backup module
      [backup] 1
         uid = tux 2
         gid = users 2
         path = /var/backup 3
         auth users = tux  4
         secrets file = /etc/rsyncd.secrets 5
         comment = Our backup target

      1

      The backup target. You can use any name you like. However, it is a good idea to name a target according to its purpose and use the same name in your *.conf file.

      2

      Specifies the user name or group name that is used when the file transfer takes place.

      3

      Defines the path to store your backups (from Step 1).

      4

      Specifies a comma-separated list of allowed users. In its simplest form, it contains the user names that are allowed to connect to this module. In our case, only user tux is allowed.

      5

      Specifies the path of a file that contains lines with user names and plain passwords.

    4. Create the /etc/rsyncd.secrets file with the following content and replace PASSPHRASE:

      # user:passwd
      tux:PASSPHRASE
    5. Make sure the file is only readable by root:

      root # chmod 0600 /etc/rsyncd.secrets
  4. Start and enable the rsyncd daemon with:

    root # systemctl enable rsyncd
    root # systemctl start rsyncd
  5. Test the access to your Rsync server:

    tux > rsync jupiter::

    You should see a response that looks like this:

    backup          Our backup target

    Otherwise, check your configuration file, firewall and network settings.

The above steps create an Rsync server that can now be used to store backups. The example also creates a log file listing all connections. This file is stored in /var/log/rsync.log, as configured with the log file option above. This is useful if you want to debug your transfers.

To list the content of your backup target, use the following command:

rsync -avz jupiter::backup

This command lists all files present in the directory /var/backup on the server. This request is also logged in the log file /var/log/rsync.log. To start an actual transfer, provide a source directory. Use . for the current directory. For example, the following command copies the current directory to your Rsync backup server:

rsync -avz . jupiter::backup

By default, Rsync does not delete files and directories when it runs. To enable deletion, the additional option --delete must be specified. To prevent files that are newer on the destination from being overwritten, the option --update can be used. Any conflicts that arise must be resolved manually.
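
For example, to mirror the current directory to the backup target set up above, deleting files on the server that no longer exist locally:

rsync -avz --delete . jupiter::backup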

9.6 For More Information

CSync

Bidirectional file synchronizer, see https://www.csync.org/.

RSnapshot

Creates incremental backups, see http://rsnapshot.org.

Unison

A file synchronizer similar to CSync but with a graphical interface, see http://www.seas.upenn.edu/~bcpierce/unison/.

Rear

A disaster recovery framework, see the Administration Guide of the SUSE Linux Enterprise High Availability Extension https://www.suse.com/documentation/sle-ha-12/.

10 GNOME Configuration for Administrators


This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.

These configuration options are stored in the GConf system. Access the GConf system with tools such as the gconftool-2 command line interface or the gconf-editor GUI tool.
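
For example, reading and setting a single key with gconftool-2 could look like this (the key shown is a common GConf example and may not exist on every system):

tux > gconftool-2 --get /desktop/gnome/interface/font_name
tux > gconftool-2 --set /desktop/gnome/interface/font_name --type string "Sans 10"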

10.1 Starting Applications Automatically

To automatically start applications in GNOME, use one of the following methods:

  • To run applications for each user:  Put .desktop files in /usr/share/gnome/autostart.

  • To run applications for an individual user:  Put .desktop files in ~/.config/autostart.

To disable an application that starts automatically, add X-Autostart-enabled=false to the .desktop file.
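
A minimal autostart .desktop file could look like the following sketch (application name and command are placeholders):

[Desktop Entry]
Type=Application
Name=Example Application
Exec=example-app
# Remove this line or set it to true to enable autostarting again
X-Autostart-enabled=false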

10.2 Automounting and Managing Media Devices

GNOME Files (nautilus) monitors volume-related events and responds with a user-specified policy. You can use GNOME Files to automatically mount hotplugged drives and inserted removable media, automatically run programs, and play audio CDs or video DVDs. GNOME Files can also automatically import photos from a digital camera.

System administrators can set system-wide defaults. For more information, see Section 10.3, “Changing Preferred Applications”.

10.3 Changing Preferred Applications

To change users' preferred applications, edit /etc/gnome_defaults.conf. Find further hints within this file.

For more information about MIME types, see http://www.freedesktop.org/Standards/shared-mime-info-spec.

10.4 Adding Document Templates

To add document templates for users, fill in the Templates directory in a user's home directory. You can do this manually for each user by copying the files into ~/Templates, or system-wide by adding a Templates directory with documents to /etc/skel before the user is created.
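
For example, to provide a template system-wide for all users created in the future (the template file name is a placeholder):

root # mkdir -p /etc/skel/Templates
root # cp invoice-template.odt /etc/skel/Templates/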

A user creates a new document from a template by right-clicking the desktop and selecting Create Document.

10.5 For More Information

For more information, see http://help.gnome.org/admin/.

Part II Booting a Linux System

11 Introduction to the Booting Process

Booting a Linux system involves different components and tasks. The hardware itself is initialized by the BIOS or the UEFI, which starts the kernel by means of a boot loader. After this point, the boot process is completely controlled by the operating system and handled by systemd. systemd provides a set of targets that boot the system into configurations for everyday use, maintenance, or emergencies.

12 UEFI (Unified Extensible Firmware Interface)

UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.

13 The Boot Loader GRUB 2

This chapter describes how to configure GRUB 2, the boot loader used in SUSE® Linux Enterprise Desktop. It is the successor to the traditional GRUB boot loader—now called GRUB Legacy. GRUB 2 has been the default boot loader in SUSE® Linux Enterprise Desktop since version 12. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 11, Introduction to the Booting Process. For details on Secure Boot support for UEFI machines, see Chapter 12, UEFI (Unified Extensible Firmware Interface).

14 The systemd Daemon

The program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its child processes.

11 Introduction to the Booting Process

Abstract

Booting a Linux system involves different components and tasks. The hardware itself is initialized by the BIOS or the UEFI, which starts the kernel by means of a boot loader. After this point, the boot process is completely controlled by the operating system and handled by systemd. systemd provides a set of targets that boot the system into configurations for everyday use, maintenance, or emergencies.

11.1 The Linux Boot Process

The Linux boot process consists of several stages, each represented by a different component. The following list briefly summarizes the boot process and features all the major components involved:

  1. BIOS/UEFI.  After turning on the computer, the BIOS or the UEFI initializes the screen and keyboard, and tests the main memory. Up to this stage, the machine does not access any mass storage media. Subsequently, the information about the current date, time, and the most important peripherals is loaded from the CMOS values. When the first hard disk and its geometry are recognized, the system control passes from the BIOS to the boot loader. If the BIOS supports network booting, it is also possible to configure a boot server that provides the boot loader. On AMD64/Intel 64 systems, PXE boot is needed. Other architectures commonly use the BOOTP protocol to get the boot loader. For more information on UEFI, refer to Chapter 12, UEFI (Unified Extensible Firmware Interface).

  2. Boot Loader.  The first physical 512-byte data sector of the first hard disk is loaded into the main memory and the boot loader that resides at the beginning of this sector takes over. The commands executed by the boot loader determine the remaining part of the boot process. Therefore, the first 512 bytes on the first hard disk are called the Master Boot Record (MBR). The boot loader then passes control to the actual operating system, in this case, the Linux kernel. More information about GRUB 2, the Linux boot loader, can be found in Chapter 13, The Boot Loader GRUB 2. For a network boot, the BIOS acts as the boot loader. It gets the boot image from the boot server and starts the system. This is completely independent of local hard disks.

    If the root file system fails to mount from within the boot environment, it must be checked and repaired before the boot can continue. The file system checker is started automatically for Ext3 and Ext4 file systems. The repair process is not automated for XFS and Btrfs file systems, and the user is presented with information describing the options available to repair the file system. When the file system has been successfully repaired, exiting the boot environment will cause the system to retry mounting the root file system. If successful, the boot will continue normally.

  3. Kernel and initramfs.  To pass system control, the boot loader loads both the kernel and an initial RAM-based file system (initramfs) into memory. The contents of the initramfs can be used by the kernel directly. initramfs contains a small executable called init that handles the mounting of the real root file system. If special hardware drivers are needed before the mass storage can be accessed, they must be in initramfs. For more information about initramfs, refer to Section 11.2, “initramfs”. If the system does not have a local hard disk, the initramfs must provide the root file system for the kernel. This can be done using a network block device like iSCSI or SAN, but it is also possible to use NFS as the root device.

    Note
    Note: The init Process Naming

    Two different programs are commonly named init:

    1. the initramfs process mounting the root file system

    2. the operating system process setting up the system

    In this chapter we will therefore refer to them as init on initramfs and systemd, respectively.

  4. init on initramfs.  This program performs all actions needed to mount the proper root file system. It provides kernel functionality for the needed file system and device drivers for mass storage controllers with udev. After the root file system has been found, it is checked for errors and mounted. If this is successful, the initramfs is cleaned and the systemd daemon on the root file system is executed. For more information about init on initramfs, refer to Section 11.3, “Init on initramfs”. Find more information about udev in Chapter 22, Dynamic Kernel Device Management with udev.

  5. systemd.  By starting services and mounting file systems, systemd handles the actual booting of the system. systemd is described in Chapter 14, The systemd Daemon.

11.2 initramfs

initramfs is a small cpio archive that the kernel can load into a RAM disk. It provides a minimal Linux environment that enables the execution of programs before the actual root file system is mounted. This minimal Linux environment is loaded into memory by BIOS or UEFI routines and does not have specific hardware requirements other than sufficient memory. The initramfs archive must always provide an executable named init that executes the systemd daemon on the root file system for the boot process to proceed.

Before the root file system can be mounted and the operating system can be started, the kernel needs the corresponding drivers to access the device on which the root file system is located. These drivers may include special drivers for certain kinds of hard disks or even network drivers to access a network file system. The needed modules for the root file system may be loaded by init on initramfs. After the modules are loaded, udev provides the initramfs with the needed devices. Later in the boot process, after changing the root file system, it is necessary to regenerate the devices. This is done by the systemd unit udev.service with the command udevadm trigger.

If you need to change hardware (for example, hard disks), and this hardware requires different drivers to be in the kernel at boot time, you must update the initramfs file. This is done by calling dracut -f (the option -f overwrites the existing initramfs file). To add a driver for the new hardware, edit /etc/dracut.conf.d/01-dist.conf and add the following line. If the file does not exist, create it.

force_drivers+="DRIVER1"

Replace DRIVER1 with the module name of the driver. If you need to add more than one driver, list them space-separated (DRIVER1 DRIVER2).
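
For example, if the new disk required the nvme driver, the combined steps could look like this (the driver name is an example; replace it with the module your hardware needs):

root # echo 'force_drivers+="nvme"' >> /etc/dracut.conf.d/01-dist.conf
root # dracut -f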

Important
Important: Updating initramfs or init

The boot loader loads initramfs or init in the same way as the kernel. It is not necessary to re-install GRUB 2 after updating initramfs or init, because GRUB 2 searches the directory for the right file when booting.

Tip
Tip: Changing Kernel Variables

If you change the values of kernel variables via the sysctl interface by editing related files (/etc/sysctl.conf or /etc/sysctl.d/*.conf), the change will be lost on the next system reboot. Even if you load the values with sysctl --system at runtime, the changes are not saved into the initramfs file. You need to update it by calling dracut -f (the option -f overwrites the existing initramfs file).

11.3 Init on initramfs

The main purpose of init on initramfs is to prepare the mounting of and access to the real root file system. Depending on your system configuration, init on initramfs is responsible for the following tasks.

Loading Kernel Modules

Depending on your hardware configuration, special drivers may be needed to access the hardware components of your computer (the most important component being your hard disk). To access the final root file system, the kernel needs to load the proper file system drivers.

Providing Block Special Files

For each loaded module, the kernel generates device events. udev handles these events and generates the required special block files on a RAM file system in /dev. Without those special files, the file system and other devices would not be accessible.

Managing RAID and LVM Setups

If you configured your system to hold the root file system under RAID or LVM, init on initramfs sets up LVM or RAID to enable access to the root file system later.

To change your /usr or swap partitions directly without the help of YaST, further actions are needed. If you forget these steps, your system will start in emergency mode. To avoid starting in emergency mode, perform the following steps:

Procedure 11.1: Updating Init RAM Disk When Switching to Logical Volumes
  1. Edit the corresponding entry in /etc/fstab and replace your previous partitions with the logical volume.

  2. Execute the following commands:

    root # mount -a
    root # swapon -a
  3. Regenerate your initial RAM disk (initramfs) with mkinitrd or dracut.

  4. For z Systems, additionally run grub2-install.

Find more information about RAID and LVM in Chapter 9, Advanced Disk Setup.

Managing Network Configuration

If you configured your system to use a network-mounted root file system (mounted via NFS), init on initramfs must make sure that the proper network drivers are loaded and that they are set up to allow access to the root file system.

If the file system resides on a network block device like iSCSI or SAN, the connection to the storage server is also set up by init on initramfs. SUSE Linux Enterprise Desktop supports booting from a secondary iSCSI target if the primary target is not available.

When init on initramfs is called during the initial boot as part of the installation process, its tasks differ from those mentioned above:

Finding the Installation Medium

When starting the installation process, your machine loads an installation kernel and a special init containing the YaST installer. The YaST installer is running in a RAM file system and needs to have information about the location of the installation medium to access it for installing the operating system.

Initiating Hardware Recognition and Loading Appropriate Kernel Modules

As mentioned in Section 11.2, “initramfs”, the boot process starts with a minimum set of drivers that can be used with most hardware configurations. init starts an initial hardware scanning process that determines the set of drivers suitable for your hardware configuration. These drivers are used to generate a custom initramfs that is needed to boot the system. If the modules are not needed for boot but for coldplug, the modules can be loaded with systemd; for more information, see Section 14.6.4, “Loading Kernel Modules”.

Loading the Installation System

When the hardware is properly recognized, the appropriate drivers are loaded. The udev program creates the special device files and init starts the installation system with the YaST installer.

Starting YaST

Finally, init starts YaST, which starts package installation and system configuration.

12 UEFI (Unified Extensible Firmware Interface)


UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.

UEFI is becoming more and more available on PC systems and thus is replacing the traditional PC-BIOS. UEFI, for example, properly supports 64-bit systems and offers secure booting (Secure Boot, firmware version 2.3.1c or better required), which is one of its most important features. Lastly, with UEFI a standard firmware will become available on all x86 platforms.

UEFI additionally offers the following advantages:

  • Booting from large disks (over 2 TiB) with a GUID Partition Table (GPT).

  • CPU-independent architecture and drivers.

  • Flexible pre-OS environment with network capabilities.

  • CSM (Compatibility Support Module) to support booting legacy operating systems via a PC-BIOS-like emulation.

For more information, see http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface. The following sections are not meant as a general UEFI overview; these are only hints about how some features are implemented in SUSE Linux Enterprise Desktop.

12.1 Secure Boot

In the world of UEFI, securing the bootstrapping process means establishing a chain of trust. The platform is the root of this chain of trust; in the context of SUSE Linux Enterprise Desktop, the mainboard and the on-board firmware could be considered the platform. In other words, it is the hardware vendor, and the chain of trust flows from that hardware vendor to the component manufacturers, the OS vendors, etc.

The trust is expressed via public key cryptography. The hardware vendor puts a so-called Platform Key (PK) into the firmware, representing the root of trust. The trust relationship with operating system vendors and others is documented by signing their keys with the Platform Key.

Finally, security is established by requiring that no code will be executed by the firmware unless it has been signed by one of these trusted keys—be it an OS boot loader, some driver located in the flash memory of some PCI Express card or on disk, or be it an update of the firmware itself.

To use Secure Boot, you need to have your OS loader signed with a key trusted by the firmware, and you need the OS loader to verify that the kernel it loads can be trusted.

Key Exchange Keys (KEK) can be added to the UEFI key database. This way, you can use other certificates, as long as they are signed with the private part of the PK.

12.1.1 Implementation on SUSE Linux Enterprise Desktop

Microsoft’s Key Exchange Key (KEK) is installed by default.

The Secure Boot feature is enabled by default on UEFI/x86_64 installations. You can find the Enable Secure Boot Support option in the Boot Code Options tab of the Boot Loader Settings dialog. It supports booting when Secure Boot is activated in the firmware, while making it possible to boot when it is deactivated.

Secure Boot Support
Figure 12.1: Secure Boot Support

Note
Note: GUID Partitioning Table (GPT) Required

The Secure Boot feature requires that a GUID Partitioning Table (GPT) replaces the old partitioning with a Master Boot Record (MBR). If YaST detects EFI mode during the installation, it will try to create a GPT partition. UEFI expects to find the EFI programs on a FAT-formatted EFI System Partition (ESP).

Supporting UEFI Secure Boot essentially requires having a boot loader with a digital signature that the firmware recognizes as a trusted key. That key is trusted by the firmware a priori, without requiring any manual intervention.

There are two ways of getting there. One is to work with hardware vendors to have them endorse a SUSE key, which SUSE then signs the boot loader with. The other way is to go through Microsoft’s Windows Logo Certification program to have the boot loader certified and have Microsoft recognize the SUSE signing key (that is, have it signed with their KEK). SUSE has had the loader signed by the UEFI Signing Service (that is, Microsoft in this case).

UEFI: Secure Boot Process
Figure 12.2: UEFI: Secure Boot Process

At the implementation layer, SUSE uses the shim loader, which is installed by default. It is a smart solution that avoids legal issues and simplifies the certification and signing steps considerably. The shim loader’s job is to load a boot loader such as GRUB 2 and verify it; this boot loader in turn will load only kernels signed by a SUSE key. SUSE has provided this functionality since SLE 11 SP3 on fresh installations with UEFI Secure Boot enabled.

There are two types of trusted users:

  • First, those who hold the keys. The Platform Key (PK) allows almost everything. The Key Exchange Key (KEK) allows everything a PK can, except changing the PK.

  • Second, anyone with physical access to the machine. A user with physical access can reboot the machine, and configure UEFI.

UEFI offers two types of variables to fulfill the needs of those users:

  • The first is the so-called Authenticated Variables, which can be updated both from within the boot process (the so-called Boot Services Environment) and from the running OS. This can be done only when the new value of the variable is signed with the same key as the old value. In addition, they can only be appended to or changed to a value with a higher serial number.

  • The second is the so-called Boot Services Only Variables. These variables are accessible to any code that runs during the boot process. After the boot process ends and before the OS starts, the boot loader must call the ExitBootServices call. After that, these variables are no longer accessible, and the OS cannot touch them.

The various UEFI key lists are of the first type, as this allows online updating, adding, and blacklisting of keys, drivers, and firmware fingerprints. It is the second type of variable, the Boot Services Only Variable, that helps to implement Secure Boot in a secure and open source-friendly manner, and thus compatible with GPLv3.

SUSE starts with shim—a small and simple EFI boot loader signed by SUSE and Microsoft.

This allows shim to load and execute.

shim then goes on to verify that the boot loader it wants to load is trusted. In a default situation, shim will use an independent SUSE certificate embedded in its body. In addition, shim allows enrolling additional keys that override the default SUSE key. In the following, we call them Machine Owner Keys or MOKs for short.

Next the boot loader will verify and then boot the kernel, and the kernel will do the same on the modules.

12.1.2 MOK (Machine Owner Key)

If the user (machine owner) wants to replace any components of the boot process, Machine Owner Keys (MOKs) are to be used. The mokutil tool will help with signing components and managing MOKs.

The enrollment process begins with rebooting the machine and interrupting the boot process (for example, pressing a key) when shim loads. shim will then go into enrollment mode, allowing the user to replace the default SUSE key with keys from a file on the boot partition. If the user chooses to do so, shim will then calculate a hash of that file and put the result in a Boot Services Only variable. This allows shim to detect any change of the file made outside of Boot Services and thus avoid tampering with the list of user-approved MOKs.

All of this happens during boot time—only verified code is executing now. Therefore, only a user present at the console can use the machine owner's set of keys. It cannot be malware or a hacker with remote access to the OS because hackers or malware can only change the file, but not the hash stored in the Boot Services Only variable.

The boot loader, after having been loaded and verified by shim, will call back to shim when it wants to verify the kernel—to avoid duplication of the verification code. Shim will use the same list of MOKs for this and tell the boot loader whether it can load the kernel.

This way, you can install your own kernel or boot loader. It is only necessary to install a new set of keys and authorize them by being physically present during the first reboot. Because MOKs are a list and not a single MOK, you can make shim trust keys from several vendors, allowing dual- and multi-boot from the boot loader.

12.1.3 Booting a Custom Kernel

The following is based on http://en.opensuse.org/openSUSE:UEFI#Booting_a_custom_kernel.

Secure Boot does not prevent you from using a self-compiled kernel. You must sign it with your own certificate and make that certificate known to the firmware or MOK.

  1. Create a custom X.509 key and certificate used for signing:

    openssl req -new -x509 -newkey rsa:2048 -keyout key.asc \
      -out cert.pem -nodes -days 666 -subj "/CN=$USER/"

    For more information about creating certificates, see http://en.opensuse.org/openSUSE:UEFI_Image_File_Sign_Tools#Create_Your_Own_Certificate.

  2. Package the key and the certificate as a PKCS#12 structure:

    openssl pkcs12 -export -inkey key.asc -in cert.pem \
      -name kernel_cert -out cert.p12
  3. Generate an NSS database for use with pesign:

    certutil -d . -N
  4. Import the key and the certificate contained in PKCS#12 into the NSS database:

    pk12util -d . -i cert.p12
  5. Bless the kernel with the new signature using pesign:

    pesign -n . -c kernel_cert -i arch/x86/boot/bzImage \
      -o vmlinuz.signed -s
  6. List the signatures on the kernel image:

    pesign -n . -S -i vmlinuz.signed

    At that point, you can install the kernel in /boot as usual. Because the kernel now has a custom signature, the certificate used for signing needs to be imported into the UEFI firmware or MOK.

  7. Convert the certificate to the DER format for import into the firmware or MOK:

    openssl x509 -in cert.pem -outform der -out cert.der
  8. Copy the certificate to the ESP for easier access:

    sudo cp cert.der /boot/efi/
  9. Use mokutil to enroll the certificate in the MOK list automatically.

      1. Import the certificate to MOK:

        mokutil --root-pw --import cert.der

        The --root-pw option enables using the root password directly.

      2. Check the list of certificates that are prepared to be enrolled:

        mokutil --list-new
      3. Reboot the system; shim should launch MokManager. You need to enter the root password to confirm the import of the certificate to the MOK list.

      4. Check if the newly imported key was enrolled:

        mokutil --list-enrolled
      Alternatively, follow this procedure if you want to launch MokManager manually:

      1. Reboot the system.

      2. In the GRUB 2 menu press the 'c' key.

      3. Type:

        chainloader $efibootdir/MokManager.efi
        boot
      4. Select Enroll key from disk.

      5. Navigate to the cert.der file and press Enter.

      6. Follow the instructions to enroll the key. Normally this should be pressing '0' and then 'y' to confirm.

        Alternatively, the firmware menu may provide ways to add a new key to the Signature Database.

12.1.4 Using Non-Inbox Drivers

There is no support for adding non-inbox drivers (that is, drivers that do not come with SUSE Linux Enterprise Desktop) during installation with Secure Boot enabled. The signing key used for SolidDriver/PLDP is not trusted by default.

It is possible to install third-party drivers during installation with Secure Boot enabled in two different ways:

  • Add the needed keys to the firmware database via firmware/system management tools before the installation. This option depends on the specific hardware you are using. Consult your hardware vendor for more information.

  • Use a bootable driver ISO from https://drivers.suse.com/ or your hardware vendor to enroll the needed keys in the MOK list at first boot.

To use the bootable driver ISO to enroll the driver keys to the MOK list, follow these steps:

  1. Burn the ISO image above to an empty CD/DVD medium.

  2. Start the installation using the new CD/DVD medium, having the standard installation media at hand or a URL to a network installation server.

    If doing a network installation, enter the URL of the network installation source on the boot command line using the install= option.

    If doing installation from optical media, the installer will first boot from the driver kit and then ask to insert the first installation disk of the product.

  3. An initrd containing updated drivers will be used for installation.

For more information, see https://drivers.suse.com/doc/Usage/Secure_Boot_Certificate.html.

12.1.5 Features and Limitations

When booting in Secure Boot mode, the following features apply:

  • Installation to UEFI default boot loader location, a mechanism to keep or restore the EFI boot entry.

  • Reboot via UEFI.

  • Xen hypervisor will boot with UEFI when there is no legacy BIOS to fall back to.

  • UEFI IPv6 PXE boot support.

  • UEFI video mode support: the kernel can retrieve the video mode from UEFI to configure KMS with the same parameters.

  • UEFI booting from USB devices is supported.

When booting in Secure Boot mode, the following limitations apply:

  • To ensure that Secure Boot cannot be easily circumvented, some kernel features are disabled when running under Secure Boot.

  • Boot loader, kernel, and kernel modules must be signed.

  • Kexec and Kdump are disabled.

  • Hibernation (suspend on disk) is disabled.

  • Access to /dev/kmem and /dev/mem is not possible, not even as root user.

  • Access to the I/O port is not possible, not even as root user. All X11 graphical drivers must use a kernel driver.

  • PCI BAR access through sysfs is not possible.

  • custom_method in ACPI is not available.

  • debugfs for asus-wmi module is not available.

  • The acpi_rsdp parameter does not have any effect on the kernel.

12.2 For More Information

13 The Boot Loader GRUB 2

Abstract

This chapter describes how to configure GRUB 2, the boot loader used in SUSE® Linux Enterprise Desktop. It is the successor to the traditional GRUB boot loader—now called GRUB Legacy. GRUB 2 has been the default boot loader in SUSE® Linux Enterprise Desktop since version 12. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 11, Introduction to the Booting Process. For details on Secure Boot support for UEFI machines, see Chapter 12, UEFI (Unified Extensible Firmware Interface).

13.1 Main Differences between GRUB Legacy and GRUB 2

  • The configuration is stored in different files.

  • More file systems are supported (for example, Btrfs).

  • Can directly read files stored on LVM or RAID devices.

  • The user interface can be translated and altered with themes.

  • Includes a mechanism for loading modules to support additional features, such as file systems.

  • Automatically searches for and generates boot entries for other kernels and operating systems, such as Windows.

  • Includes a minimal Bash-like console.

13.2 Configuration File Structure

The configuration of GRUB 2 is based on the following files:

/boot/grub2/grub.cfg

This file contains the configuration of the GRUB 2 menu items. It replaces menu.lst used in GRUB Legacy. grub.cfg is automatically generated by the grub2-mkconfig command, and should not be edited.

/boot/grub2/custom.cfg

This optional file is directly sourced by grub.cfg at boot time and can be used to add custom items to the boot menu. These entries will also be parsed when using grub2-once.

/etc/default/grub

This file controls the user settings of GRUB 2 and usually includes additional environmental settings such as backgrounds and themes.

Scripts under /etc/grub.d/

The scripts in this directory are read during execution of the grub2-mkconfig command. Their instructions are integrated into the main configuration file /boot/grub2/grub.cfg.

/etc/sysconfig/bootloader

This configuration file is used when configuring the boot loader with YaST and every time a new kernel is installed. It is evaluated by the perl-bootloader which modifies the boot loader configuration file (for example /boot/grub2/grub.cfg for GRUB 2) accordingly. /etc/sysconfig/bootloader is not a GRUB 2-specific configuration file—the values are applied to any boot loader installed on SUSE Linux Enterprise Desktop.

/boot/grub2/x86_64-efi, /boot/grub2/power-ieee1275, /boot/grub2/s390x

These configuration files contain architecture-specific options.

GRUB 2 can be controlled in various ways. Boot entries from an existing configuration can be selected from the graphical menu (splash screen). The configuration is loaded from the file /boot/grub2/grub.cfg which is compiled from other configuration files (see below). All GRUB 2 configuration files are considered system files, and you need root privileges to edit them.

Note
Note: Activating Configuration Changes

After having manually edited GRUB 2 configuration files, you need to run grub2-mkconfig to activate the changes. However, this is not necessary when changing the configuration with YaST, since YaST automatically runs grub2-mkconfig.

13.2.1 The File /boot/grub2/grub.cfg

The graphical splash screen with the boot menu is based on the GRUB 2 configuration file /boot/grub2/grub.cfg, which contains information about all partitions or operating systems that can be booted by the menu.

Every time the system is booted, GRUB 2 loads the menu file directly from the file system. For this reason, GRUB 2 does not need to be re-installed after changes to the configuration file. grub.cfg is automatically rebuilt with kernel installations or removals.

grub.cfg is compiled by grub2-mkconfig from the file /etc/default/grub and the scripts found in the /etc/grub.d/ directory. Therefore you should never edit the file manually. Instead, edit the related source files or use the YaST Boot Loader module to modify the configuration as described in Section 13.3, “Configuring the Boot Loader with YaST”.

13.2.2 The File /etc/default/grub

More general options of GRUB 2 belong here, such as the time the menu is displayed, or the default OS to boot. To list all available options, see the output of the following command:

grep "export GRUB_DEFAULT" -A50 /usr/sbin/grub2-mkconfig | grep GRUB_

In addition to already defined variables, the user may introduce their own variables, and use them later in the scripts found in the /etc/grub.d directory.

After having edited /etc/default/grub, run grub2-mkconfig to update the main configuration file.

Note
Note: Scope

All options set in this file are general options that affect all boot entries. Specific options for Xen kernels or the Xen hypervisor can be set via the GRUB_*_XEN_* configuration options. See below for details.

GRUB_DEFAULT

Sets the boot menu entry that is booted by default. Its value can be a numeric value, the complete name of a menu entry, or saved.

GRUB_DEFAULT=2 boots the third (counted from zero) boot menu entry.

GRUB_DEFAULT="2>0" boots the first submenu entry of the third top-level menu entry.

GRUB_DEFAULT="Example boot menu entry" boots the menu entry with the title Example boot menu entry.

GRUB_DEFAULT=saved boots the entry specified by the grub2-once or grub2-set-default commands. While grub2-reboot sets the default boot entry for the next reboot only, grub2-set-default sets the default boot entry until changed. grub2-editenv list lists the next boot entry.

GRUB_HIDDEN_TIMEOUT

Waits the specified number of seconds for the user to press a key. During the period no menu is shown unless the user presses a key. If no key is pressed during the time specified, the control is passed to GRUB_TIMEOUT. GRUB_HIDDEN_TIMEOUT=0 first checks whether Shift is pressed and shows the boot menu if yes, otherwise immediately boots the default menu entry. This is the default when only one bootable OS is identified by GRUB 2.

GRUB_HIDDEN_TIMEOUT_QUIET

If false is specified, a countdown timer is displayed on a blank screen when the GRUB_HIDDEN_TIMEOUT feature is active.

GRUB_TIMEOUT

Time period in seconds the boot menu is displayed before automatically booting the default boot entry. If you press a key, the timeout is cancelled and GRUB 2 waits for you to make the selection manually. GRUB_TIMEOUT=-1 will cause the menu to be displayed until you select the boot entry manually.

GRUB_CMDLINE_LINUX

Entries on this line are added at the end of the boot entries for normal and recovery mode. Use it to add kernel parameters to the boot entry.

GRUB_CMDLINE_LINUX_DEFAULT

Same as GRUB_CMDLINE_LINUX but the entries are appended in the normal mode only.

GRUB_CMDLINE_LINUX_RECOVERY

Same as GRUB_CMDLINE_LINUX but the entries are appended in the recovery mode only.

GRUB_CMDLINE_LINUX_XEN_REPLACE

This entry will completely replace the GRUB_CMDLINE_LINUX parameters for all Xen boot entries.

GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT

Same as GRUB_CMDLINE_LINUX_XEN_REPLACE but it will only replace parameters of GRUB_CMDLINE_LINUX_DEFAULT.

GRUB_CMDLINE_XEN

This entry specifies the kernel parameters for the Xen guest kernel only—the operation principle is the same as for GRUB_CMDLINE_LINUX.

GRUB_CMDLINE_XEN_DEFAULT

Same as GRUB_CMDLINE_XEN—the operation principle is the same as for GRUB_CMDLINE_LINUX_DEFAULT.

GRUB_TERMINAL

Enables and specifies an input/output terminal device. Can be console (PC BIOS and EFI consoles), serial (serial terminal), ofconsole (Open Firmware console), or the default gfxterm (graphics-mode output). It is also possible to enable more than one device by quoting the required options, for example GRUB_TERMINAL="console serial".

GRUB_GFXMODE

The resolution used for the gfxterm graphical terminal. Note that you can only use modes supported by your graphics card (VBE). The default is ‘auto’, which tries to select a preferred resolution. You can display the screen resolutions available to GRUB 2 by typing videoinfo in the GRUB 2 command line. The command line is accessed by typing C when the GRUB 2 boot menu screen is displayed.

You can also specify a color depth by appending it to the resolution setting, for example GRUB_GFXMODE=1280x1024x24.

GRUB_BACKGROUND

Set a background image for the gfxterm graphical terminal. The image must be a file readable by GRUB 2 at boot time, and it must end with the .png, .tga, .jpg, or .jpeg suffix. If necessary, the image will be scaled to fit the screen.

GRUB_DISABLE_OS_PROBER

If this option is set to true, automatic searching for other operating systems is disabled. Only the kernel images in /boot/ and the options from your own scripts in /etc/grub.d/ are detected.

SUSE_BTRFS_SNAPSHOT_BOOTING

If this option is set to true, GRUB 2 can boot directly into Snapper snapshots. For more information, see Section 7.3, “System Rollback by Booting from Snapshots”.

For a complete list of options, see the GNU GRUB manual. For a complete list of possible parameters, see http://en.opensuse.org/Linuxrc.
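Putting several of the options described above together, a minimal illustrative /etc/default/grub might look as follows (the values are examples only and need to be adapted to your system):

GRUB_DEFAULT=saved
GRUB_TIMEOUT=8
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash=silent"
GRUB_TERMINAL=gfxterm
GRUB_GFXMODE=1280x1024x24
GRUB_DISABLE_OS_PROBER=false

After saving changes like these, run grub2-mkconfig -o /boot/grub2/grub.cfg to regenerate the main configuration file.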

13.2.3 Scripts in /etc/grub.d

The scripts in this directory are read during execution of the grub2-mkconfig command, and their instructions are incorporated into /boot/grub2/grub.cfg. The order of menu items in grub.cfg is determined by the order in which the files in this directory are run. Files with a leading numeral are executed first, beginning with the lowest number. 00_header is run before 10_linux, which would run before 40_custom. If files with alphabetic names are present, they are executed after the numerically-named files. Only executable files generate output to grub.cfg during execution of grub2-mkconfig. By default all files in the /etc/grub.d directory are executable.

Tip
Tip: Persistent Custom Content in grub.cfg

Because /boot/grub2/grub.cfg is recompiled each time grub2-mkconfig is run, any custom content is lost. If you want to insert lines directly into /boot/grub2/grub.cfg without losing them after grub2-mkconfig is run, insert them between

### BEGIN /etc/grub.d/90_persistent ###

and

### END /etc/grub.d/90_persistent ###

lines. The 90_persistent script ensures that such content will be preserved.

A list of the most important scripts follows:

00_header

Sets environmental variables such as system file locations, display settings, themes, and previously saved entries. It also imports preferences stored in /etc/default/grub. Normally you do not need to make changes to this file.

10_linux

Identifies Linux kernels on the root device and creates relevant menu entries. This includes the associated recovery mode option if enabled. Only the latest kernel is displayed on the main menu page, with additional kernels included in a submenu.

30_os-prober

This script uses OS-prober to search for Linux and other operating systems and places the results in the GRUB 2 menu. There are sections to identify specific other operating systems, such as Windows or macOS.

40_custom

This file provides a simple way to include custom boot entries into grub.cfg. Make sure that you do not change the exec tail -n +3 $0 part at the beginning.
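For illustration, a custom BIOS-style chainload entry appended to 40_custom could look like the following sketch (the menu title and device are hypothetical and need to be adapted to your setup):

#!/bin/sh
exec tail -n +3 $0
menuentry "Windows (custom)" {
  insmod ntfs
  set root=(hd0,1)
  chainloader +1
}

After editing the file, run grub2-mkconfig to include the entry in /boot/grub2/grub.cfg.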

The processing sequence is set by the preceding numbers with the lowest number being executed first. If scripts are preceded by the same number the alphabetical order of the complete name decides the order.

Tip
Tip: /boot/grub2/custom.cfg

If you create /boot/grub2/custom.cfg and fill it with custom content, it will be automatically included into /boot/grub2/grub.cfg at boot time.

13.2.4 Mapping between BIOS Drives and Linux Devices

In GRUB Legacy, the device.map configuration file was used to derive Linux device names from BIOS drive numbers. The mapping between BIOS drives and Linux devices cannot always be guessed correctly. For example, GRUB Legacy would get a wrong order if the boot sequence of IDE and SCSI drives is exchanged in the BIOS configuration.

GRUB 2 avoids this problem by using device ID strings (UUIDs) or file system labels when generating grub.cfg. GRUB 2 utilities create a temporary device map on the fly, which is usually sufficient, particularly in the case of single-disk systems.

However, if you need to override GRUB 2's automatic device mapping mechanism, create your custom mapping file /boot/grub2/device.map. The following example changes the mapping to make DISK3 the boot disk. Note that GRUB 2 partition numbers start with 1 and not with 0 as in GRUB Legacy.

(hd1)  /dev/disk/by-id/DISK3_ID
(hd2)  /dev/disk/by-id/DISK1_ID
(hd3)  /dev/disk/by-id/DISK2_ID

13.2.5 Editing Menu Entries during the Boot Procedure

Being able to directly edit menu entries is useful when the system does not boot anymore because of a faulty configuration. It can also be used to test new settings without altering the system configuration.

  1. In the graphical boot menu, select the entry you want to edit with the arrow keys.

  2. Press E to open the text-based editor.

  3. Use the arrow keys to move to the line you want to edit.

    Figure 13.1: GRUB 2 Boot Editor

    Now you have two options:

    1. Add space-separated parameters to the end of the line starting with linux or linuxefi to edit the kernel parameters. A complete list of parameters is available at http://en.opensuse.org/Linuxrc.

    2. Or edit the general options to change for example the kernel version. The →| key suggests all possible completions.

  4. Press F10 to boot the system with the changes you made or press Esc to discard your edits and return to the GRUB 2 menu.

Changes made this way only apply to the current boot process and are not saved permanently.

Important
Important: Keyboard Layout During the Boot Procedure

The US keyboard layout is the only one available when booting. See Figure 34.2, “US Keyboard Layout”.

Note
Note: Boot Loader on the Installation Media

The Boot Loader of the installation media on systems with a traditional BIOS is still GRUB Legacy. To add boot options, select an entry and start typing. Additions you make to the installation boot entry will be permanently saved in the installed system.

Note
Note: Editing GRUB 2 Menu Entries on z Systems

Cursor movement and editing commands on IBM z Systems differ—see Section 13.4, “Differences in Terminal Usage on z Systems” for details.

13.2.6 Setting a Boot Password

Even before the operating system is booted, GRUB 2 enables access to file systems. Users without root permissions can access files in your Linux system to which they have no access after the system is booted. To block this kind of access or to prevent users from booting certain menu entries, set a boot password.

Important
Important: Booting Requires Password

If set, the boot password is required on every boot, which means the system does not boot automatically.

Proceed as follows to set a boot password. Alternatively, use YaST (Protect Boot Loader with Password).

  1. Encrypt the password using grub2-mkpasswd-pbkdf2:

    tux >  sudo grub2-mkpasswd-pbkdf2
    Password: ****
    Reenter password: ****
    PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
  2. Paste the resulting string into the file /etc/grub.d/40_custom together with the set superusers command.

    set superusers="root"
    password_pbkdf2 root grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
  3. Run grub2-mkconfig to import the changes into the main configuration file.

After you reboot, you will be prompted for a user name and a password when trying to boot a menu entry. Enter root and the password you typed during the grub2-mkpasswd-pbkdf2 command. If the credentials are correct, the system will boot the selected boot entry.

For more information, see https://www.gnu.org/software/grub/manual/grub.html#Security.

13.3 Configuring the Boot Loader with YaST


The easiest way to configure general options of the boot loader in your SUSE Linux Enterprise Desktop system is to use the YaST module. In the YaST Control Center, select System › Boot Loader. The module shows the current boot loader configuration of your system and allows you to make changes.

Use the Boot Code Options tab to view and change settings related to type, location and advanced loader settings. You can choose whether to use GRUB 2 in standard or EFI mode.

Figure 13.2: Boot Code Options
Important
Important: EFI Systems require GRUB2-EFI

If you have an EFI system, you must install GRUB2-EFI; otherwise your system will not be bootable.

Important
Important: Reinstalling the Boot Loader

To reinstall the boot loader, make sure to change a setting in YaST and then change it back. For example, to reinstall GRUB2-EFI, select GRUB2 first and then immediately switch back to GRUB2-EFI.

Otherwise, the boot loader may only be partially reinstalled.

Note
Note: Custom Boot Loader

To use a boot loader other than the ones listed, select Do Not Install Any Boot Loader. Read the documentation of your boot loader carefully before choosing this option.

13.3.1 Boot Loader Location and Boot Code Options

The default location of the boot loader depends on the partition setup and is either the Master Boot Record (MBR) or the boot sector of the / partition. To modify the location of the boot loader, follow these steps:

Procedure 13.1: Changing the Boot Loader Location
  1. Select the Boot Code Options tab and then choose one of the following options for Boot Loader Location:

    Boot from Master Boot Record

    This installs the boot loader in the MBR of the disk containing the directory /boot. Usually this will be the disk mounted to /, but if /boot is mounted to a separate partition on a different disk, the MBR of that disk will be used.

    Boot from Root Partition

    This installs the boot loader in the boot sector of the / partition.

    Custom Boot Partition

    Use this option to specify the location of the boot loader manually.

  2. Click OK to apply your changes.

Figure 13.3: Code Options

The Boot Code Options tab includes the following additional options:

Set Active Flag in Partition Table for Boot Partition

Activates the partition that contains the /boot directory (on POWER systems, the PReP partition). Use this option on systems with old BIOS and/or legacy operating systems because they may fail to boot from a non-active partition. It is safe to leave this option active.

Write Generic Boot Code to MBR

If the MBR contains custom non-GRUB code, this option replaces it with generic, operating system independent code. If you deactivate this option, the system may become unbootable.

Enable Trusted Boot Support

Starts TrustedGRUB2 which supports trusted computing functionality (Trusted Platform Module (TPM)). For more information refer to https://github.com/Sirrix-AG/TrustedGRUB2.

13.3.2 Adjusting the Disk Order

If your computer has more than one hard disk, you can specify the boot sequence of the disks. The first disk in the list is where GRUB 2 will be installed in the case of booting from MBR. It is the disk where SUSE Linux Enterprise Desktop is installed by default. The rest of the list is a hint for GRUB 2's device mapper (see Section 13.2.4, “Mapping between BIOS Drives and Linux Devices”).

Warning
Warning: Unbootable System

The default value is usually valid for almost all deployments. If you change the boot order of disks wrongly, the system may become unbootable on the next reboot. This can happen, for example, if the first disk in the list is not part of the BIOS boot order and the other disks in the list have empty MBRs.

Procedure 13.2: Setting the Disk Order
  1. Open the Boot Code Options tab.

  2. Click Edit Disk Boot Order.

  3. If more than one disk is listed, select a disk and click Up or Down to reorder the displayed disks.

  4. Click OK two times to save the changes.

13.3.3 Configuring Advanced Options

Advanced boot options can be configured via the Boot Loader Options tab.

13.3.3.1 Boot Loader Options Tab

Figure 13.4: Boot Loader Options
Boot Loader Time-Out

Change the value of Time-Out in Seconds by typing in a new value and clicking the appropriate arrow key with your mouse.

Probe Foreign OS

When selected, the boot loader searches for other systems like Windows or other Linux installations.

Hide Menu on Boot

Hides the boot menu and boots the default entry.

Adjusting the Default Boot Entry

Select the desired entry from the Default Boot Section list. Note that the > sign in the boot entry name delimits the boot section and its subsection.

Protect Boot Loader with Password

Protects the boot loader and the system with an additional password. For more information, see Section 13.2.6, “Setting a Boot Password”.

13.3.3.2 Kernel Parameters Tab

Figure 13.5: Kernel Parameters
Console resolution

The Console resolution option specifies the default screen resolution during the boot process.

Kernel Command Line Parameter

The optional kernel parameters are added at the end of the default parameters. For a list of all possible parameters, see http://en.opensuse.org/Linuxrc.

Use graphical console

When checked, the boot menu appears on a graphical splash screen rather than in text mode. The resolution of the boot screen can then be set from the Console resolution list, and a graphical theme definition file can be specified with the Console theme file chooser.

Use Serial Console

If your machine is controlled via a serial console, activate this option and specify which COM port to use at which speed. See info grub or http://www.gnu.org/software/grub/manual/grub.html#Serial-terminal.

13.4 Differences in Terminal Usage on z Systems

On 3215 and 3270 terminals there are some differences and limitations on how to move the cursor and how to issue editing commands within GRUB 2.

13.4.1 Limitations

Interactivity

Interactivity is strongly limited. Typing often does not result in visual feedback. To see where the cursor is, type an underscore (_).

Note
Note: 3270 Compared to 3215

The 3270 terminal is much better at displaying and refreshing screens than the 3215 terminal.

Cursor Movement

Traditional cursor movement is not possible. Alt, Meta, Ctrl and the cursor keys do not work. To move the cursor, use the key combinations listed in Section 13.4.2, “Key Combinations”.

Caret

The caret (^) is used as a control character. To type a literal ^ followed by a letter, type ^, ^, LETTER.

Enter

The Enter key does not work; use ^J instead.

13.4.2 Key Combinations

Common Substitutes:

^J

engage (Enter)

^L

abort, return to previous state

^I

tab completion (in edit and shell mode)

Keys Available in Menu Mode:

^A

first entry

^E

last entry

^P

previous entry

^N

next entry

^G

previous page

^C

next page

^F

boot selected entry or enter submenu (same as ^J)

E

edit selected entry

C

enter GRUB-Shell

Keys Available in Edit Mode:

^P

previous line

^N

next line

^B

backward char

^F

forward char

^A

beginning of line

^E

end of line

^H

backspace

^D

delete

^K

kill line

^Y

yank

^O

open line

^L

refresh screen

^X

boot entry

^C

enter GRUB-Shell

Keys Available in Command Line Mode:

^P

previous command

^N

next command from history

^A

beginning of line

^E

end of line

^B

backward char

^F

forward char

^H

backspace

^D

delete

^K

kill line

^U

discard line

^Y

yank

13.5 Helpful GRUB 2 Commands

grub2-mkconfig

Generates a new /boot/grub2/grub.cfg based on /etc/default/grub and the scripts from /etc/grub.d/.

Example 13.1: Usage of grub2-mkconfig
grub2-mkconfig -o /boot/grub2/grub.cfg
Tip
Tip: Syntax Check

Running grub2-mkconfig without any parameters prints the configuration to STDOUT where it can be reviewed. Use grub2-script-check after /boot/grub2/grub.cfg has been written to check its syntax.
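For example, a new configuration can be reviewed and syntax-checked before it is written to /boot/grub2/grub.cfg (the temporary file name is arbitrary):

grub2-mkconfig > /tmp/grub.cfg.test
grub2-script-check /tmp/grub.cfg.test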

Important
Important: grub2-mkconfig Cannot Repair UEFI Secure Boot Tables

If you are using UEFI Secure Boot and your system is not reaching GRUB 2 correctly anymore, you may need to additionally reinstall Shim and regenerate the UEFI boot table. To do so, use:

root # shim-install --config-file=/boot/grub2/grub.cfg
grub2-mkrescue

Creates a bootable rescue image of your installed GRUB 2 configuration.

Example 13.2: Usage of grub2-mkrescue
grub2-mkrescue -o save_path/name.iso iso
grub2-script-check

Checks the given file for syntax errors.

Example 13.3: Usage of grub2-script-check
grub2-script-check /boot/grub2/grub.cfg
grub2-once

Sets the default boot entry for the next boot only. To get the list of available boot entries, use the --list option.

Example 13.4: Usage of grub2-once
grub2-once number_of_the_boot_entry
Tip
Tip: grub2-once Help

Call the program without any option to get a full list of all possible options.

13.6 More Information

Extensive information about GRUB 2 is available at http://www.gnu.org/software/grub/. Also refer to the grub info page. You can also search for the keyword GRUB 2 in the Technical Information Search at http://www.suse.com/support to get information about special issues.

14 The systemd Daemon


The program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its child processes.

Starting with SUSE Linux Enterprise Desktop 12, systemd is a replacement for the popular System V init daemon. systemd is fully compatible with System V init (by supporting init scripts). One of the main advantages of systemd is that it considerably speeds up boot time by aggressively parallelizing service starts. Furthermore, systemd only starts a service when it is really needed. Daemons are not started unconditionally at boot time, but rather when being required for the first time. systemd also supports Kernel Control Groups (cgroups), snapshotting and restoring the system state and more. See http://www.freedesktop.org/wiki/Software/systemd/ for details.

14.1 The systemd Concept

This section will go into detail about the concept behind systemd.

14.1.1 What Is systemd

systemd is a system and session manager for Linux, compatible with System V and LSB init scripts. The main features are:

  • provides aggressive parallelization capabilities

  • uses socket and D-Bus activation for starting services

  • offers on-demand starting of daemons

  • keeps track of processes using Linux cgroups

  • supports snapshotting and restoring of the system state

  • maintains mount and automount points

  • implements an elaborate transactional dependency-based service control logic

14.1.2 Unit File

A unit configuration file contains information about a service, a socket, a device, a mount point, an automount point, a swap file or partition, a start-up target, a watched file system path, a timer controlled and supervised by systemd, a temporary system state snapshot, a resource management slice or a group of externally created processes. Unit file is a generic term used by systemd for the following:

  • Service.  Information about a process (for example running a daemon); file ends with .service

  • Targets.  Used for grouping units and as synchronization points during start-up; file ends with .target

  • Sockets.  Information about an IPC or network socket or a file system FIFO, for socket-based activation (like inetd); file ends with .socket

  • Path.  Used to trigger other units (for example running a service when files change); file ends with .path

  • Timer.  Information about a timer controlled and supervised by systemd, for timer-based activation; file ends with .timer

  • Mount point.  Usually auto-generated by the fstab generator; file ends with .mount

  • Automount point.  Information about a file system automount point; file ends with .automount

  • Swap.  Information about a swap device or file for memory paging; file ends with .swap

  • Device.  Information about a device unit as exposed in the sysfs/udev(7) device tree; file ends with .device

  • Scope / Slice.  A concept for hierarchically managing resources of a group of processes; file ends with .scope/.slice

For more information about systemd.unit see http://www.freedesktop.org/software/systemd/man/systemd.unit.html
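To make the unit file format tangible, a minimal hypothetical service unit could look like the following sketch (the daemon path is illustrative):

[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/sbin/example-daemon

[Install]
WantedBy=multi-user.target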

14.2 Basic Usage

The System V init system uses several commands to handle services—the init scripts, insserv, telinit and others. systemd makes it easier to manage services, since there is only one command to memorize for the majority of service-handling tasks: systemctl. It uses the command plus subcommand notation like git or zypper:

systemctl GENERAL OPTIONS SUBCOMMAND SUBCOMMAND OPTIONS

See man 1 systemctl for a complete manual.

Tip
Tip: Terminal Output and Bash Completion

If the output goes to a terminal (and not to a pipe or a file, for example) systemd commands send long output to a pager by default. Use the --no-pager option to turn off paging mode.

systemd also supports bash-completion, allowing you to enter the first letters of a subcommand and then press →| to automatically complete it. This feature is only available in the bash shell and requires the installation of the package bash-completion.

14.2.1 Managing Services in a Running System

Subcommands for managing services are the same as for managing a service with System V init (start, stop, ...). The general syntax for service management commands is as follows:

systemd
systemctl reload|restart|start|status|stop|... MY_SERVICE(S)
System V init
rcMY_SERVICE(S) reload|restart|start|status|stop|...

systemd allows you to manage several services in one go. Instead of executing init scripts one after the other as with System V init, execute a command like the following:

systemctl start MY_1ST_SERVICE MY_2ND_SERVICE

To list all services available on the system:

systemctl list-unit-files --type=service

The following table lists the most important service management commands for systemd and System V init:

Table 14.1: Service Management Commands

Task

systemd Command

System V init Command

Starting. 

start
start

Stopping. 

stop
stop

Restarting.  Shuts down services and starts them afterward. If a service is not yet running it will be started.

restart
restart

Restarting conditionally.  Restarts services if they are currently running. Does nothing for services that are not running.

try-restart
try-restart

Reloading.  Tells services to reload their configuration files without interrupting operation. Use case: Tell Apache to reload a modified httpd.conf configuration file. Note that not all services support reloading.

reload
reload

Reloading or restarting.  Reloads services if reloading is supported, otherwise restarts them. If a service is not yet running it will be started.

reload-or-restart
n/a

Reloading or restarting conditionally.  Reloads services if reloading is supported, otherwise restarts them if currently running. Does nothing for services that are not running.

reload-or-try-restart
n/a

Getting detailed status information.  Lists information about the status of services. The systemd command shows details such as description, executable, status, cgroup, and messages last issued by a service (see Section 14.6.8, “Debugging Services”). The level of details displayed with the System V init differs from service to service.

status
status

Getting short status information.  Shows whether services are active or not.

is-active
status

14.2.2 Permanently Enabling/Disabling Services

The service management commands mentioned in the previous section let you manipulate services for the current session. systemd also lets you permanently enable or disable services, so they are automatically started when requested or are always unavailable. You can either do this by using YaST, or on the command line.

14.2.2.1 Enabling/Disabling Services on the Command Line

The following table lists enabling and disabling commands for systemd and System V init:

Important
Important: Service Start

When enabling a service on the command line, it is not started automatically. It is scheduled to be started with the next system start-up or runlevel/target change. To immediately start a service after having enabled it, explicitly run systemctl start MY_SERVICE or rc MY_SERVICE start.

Table 14.2: Commands for Enabling and Disabling Services

Task

systemd Command

System V init Command

Enabling. 

systemctl enable MY_SERVICE(S)

insserv MY_SERVICE(S), chkconfig -a MY_SERVICE(S)

Disabling. 

systemctl disable MY_SERVICE(S).service

insserv -r MY_SERVICE(S), chkconfig -d MY_SERVICE(S)

Checking.  Shows whether a service is enabled or not.

systemctl is-enabled MY_SERVICE

chkconfig MY_SERVICE

Re-enabling.  Similar to restarting a service, this command first disables and then enables a service. Useful to re-enable a service with its defaults.

systemctl reenable MY_SERVICE

n/a

Masking.  After disabling a service, it can still be started manually. To completely disable a service, you need to mask it. Use with care.

systemctl mask MY_SERVICE

n/a

Unmasking.  A service that has been masked can only be used again after it has been unmasked.

systemctl unmask MY_SERVICE

n/a
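For example, to permanently enable the SSH daemon and also start it in the current session (sshd is used here purely as an illustration):

root # systemctl enable sshd
root # systemctl start sshd
root # systemctl is-enabled sshd
enabled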

14.3 System Start and Target Management

The entire process of starting the system and shutting it down is maintained by systemd. From this point of view, the kernel can be considered a background process to maintain all other processes and adjust CPU time and hardware access according to requests from other programs.

14.3.1 Targets Compared to Runlevels

With System V init the system was booted into a so-called Runlevel. A runlevel defines how the system is started and what services are available in the running system. Runlevels are numbered; the most commonly known ones are 0 (shutting down the system), 3 (multiuser with network) and 5 (multiuser with network and display manager).

systemd introduces a new concept by using so-called target units. However, it remains fully compatible with the runlevel concept. Target units are named rather than numbered and serve specific purposes. For example, the targets local-fs.target and swap.target mount local file systems and swap spaces.

The target graphical.target provides a multiuser system with network and display manager capabilities and is equivalent to runlevel 5. Complex targets, such as graphical.target, act as meta targets by combining a subset of other targets. Since systemd makes it easy to create custom targets by combining existing targets, it offers great flexibility.

The following list shows the most important systemd target units. For a full list refer to man 7 systemd.special.

Selected systemd Target Units
default.target

The target that is booted by default. Not a real target, but rather a symbolic link to another target like graphical.target. Can be permanently changed via YaST (see Section 14.4, “Managing Services with YaST”). To change it for a session, use the kernel parameter systemd.unit=MY_TARGET.target at the boot prompt.

emergency.target

Starts an emergency shell on the console. Only use it at the boot prompt as systemd.unit=emergency.target.

graphical.target

Starts a system with network, multiuser support and a display manager.

halt.target

Shuts down the system.

mail-transfer-agent.target

Starts all services necessary for sending and receiving mails.

multi-user.target

Starts a multiuser system with network.

reboot.target

Reboots the system.

rescue.target

Starts a single-user system without network.

To remain compatible with the System V init runlevel system, systemd provides special targets named runlevelX.target mapping the corresponding runlevels numbered X.

If you want to know the default target, use the command: systemctl get-default

Table 14.3: System V Runlevels and systemd Target Units

System V runlevel

systemd target

Purpose

0

runlevel0.target, halt.target, poweroff.target

System shutdown

1, S

runlevel1.target, rescue.target

Single-user mode

2

runlevel2.target, multi-user.target

Local multiuser without remote network

3

runlevel3.target, multi-user.target

Full multiuser with network

4

runlevel4.target

Unused/User-defined

5

runlevel5.target, graphical.target

Full multiuser with network and display manager

6

runlevel6.target, reboot.target

System reboot

Important
Important: systemd Ignores /etc/inittab

The runlevels in a System V init system are configured in /etc/inittab. systemd does not use this configuration. Refer to Section 14.5.3, “Creating Custom Targets” for instructions on how to create your own bootable target.

14.3.1.1 Commands to Change Targets

Use the following commands to operate with target units:

Task

systemd Command

System V init Command

Change the current target/runlevel

systemctl isolate MY_TARGET.target

telinit X

Change to the default target/runlevel

systemctl default

n/a

Get the current target/runlevel

systemctl list-units --type=target

With systemd there is usually more than one active target. The command lists all currently active targets.

who -r

or

runlevel

Persistently change the default runlevel

Use the Services Manager or run the following command:

ln -sf /usr/lib/systemd/system/MY_TARGET.target /etc/systemd/system/default.target

Use the Services Manager or change the line

id:X:initdefault:

in /etc/inittab

Change the default runlevel for the current boot process

Enter the following option at the boot prompt

systemd.unit=MY_TARGET.target

Enter the desired runlevel number at the boot prompt.

Show a target's/runlevel's dependencies

systemctl show -p "Requires" MY_TARGET.target

systemctl show -p "Wants" MY_TARGET.target

Requires lists the hard dependencies (the ones that must be resolved), whereas Wants lists the soft dependencies (the ones that get resolved if possible).

n/a

14.3.2 Debugging System Start-Up

systemd offers the means to analyze the system start-up process. You can review the list of all services and their status (rather than having to parse /var/log/). systemd also allows you to scan the start-up procedure to find out how much time each service start-up consumes.

14.3.2.1 Review Start-Up of Services

To review the complete list of services that have been started since booting the system, enter the command systemctl. It lists all active services as shown below (shortened). To get more information on a specific service, use systemctl status MY_SERVICE.

Example 14.1: List Active Services
root # systemctl
UNIT                        LOAD   ACTIVE SUB       JOB DESCRIPTION
[...]
iscsi.service               loaded active exited    Login and scanning of iSC+
kmod-static-nodes.service   loaded active exited    Create list of required s+
libvirtd.service            loaded active running   Virtualization daemon
nscd.service                loaded active running   Name Service Cache Daemon
ntpd.service                loaded active running   NTP Server Daemon
polkit.service              loaded active running   Authorization Manager
postfix.service             loaded active running   Postfix Mail Transport Ag+
rc-local.service            loaded active exited    /etc/init.d/boot.local Co+
rsyslog.service             loaded active running   System Logging Service
[...]
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

161 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

To restrict the output to services that failed to start, use the --failed option:

Example 14.2: List Failed Services
root # systemctl --failed
UNIT                   LOAD   ACTIVE SUB    JOB DESCRIPTION
apache2.service        loaded failed failed     apache
NetworkManager.service loaded failed failed     Network Manager
plymouth-start.service loaded failed failed     Show Plymouth Boot Screen

[...]

14.3.2.2 Debug Start-Up Time

To debug system start-up time, systemd offers the systemd-analyze command. It shows the total start-up time, a list of services ordered by start-up time and can also generate an SVG graphic showing the time services took to start in relation to the other services.

Listing the System Start-Up Time
root # systemd-analyze
Startup finished in 2666ms (kernel) + 21961ms (userspace) = 24628ms
Listing the Services Start-Up Time
root # systemd-analyze blame
  6472ms systemd-modules-load.service
  5833ms remount-rootfs.service
  4597ms network.service
  4254ms systemd-vconsole-setup.service
  4096ms postfix.service
  2998ms xdm.service
  2483ms localnet.service
  2470ms SuSEfirewall2_init.service
  2189ms avahi-daemon.service
  2120ms systemd-logind.service
  1210ms xinetd.service
  1080ms ntp.service
[...]
    75ms fbset.service
    72ms purge-kernels.service
    47ms dev-vda1.swap
    38ms bluez-coldplug.service
    35ms splash_early.service
Services Start-Up Time Graphics
root # systemd-analyze plot > jupiter.example.com-startup.svg

14.3.2.3 Review the Complete Start-Up Process

The above-mentioned commands let you review the services that started and the time it took to start them. If you need to know more details, you can tell systemd to verbosely log the complete start-up procedure by entering the following parameters at the boot prompt:

systemd.log_level=debug systemd.log_target=kmsg

Now systemd writes its log messages into the kernel ring buffer. View that buffer with dmesg:

dmesg -T | less

14.3.3 System V Compatibility

systemd is compatible with System V, allowing you to still use existing System V init scripts. However, there is at least one known issue where a System V init script does not work with systemd out of the box: starting a service as a different user via su or sudo in init scripts will result in a failure of the script, producing an Access denied error.

When changing the user with su or sudo, a PAM session is started. This session will be terminated after the init script is finished. As a consequence, the service that has been started by the init script will also be terminated. To work around this error, proceed as follows:

  1. Create a service file wrapper with the same name as the init script plus the file name extension .service:

    [Unit]
    Description=DESCRIPTION
    After=network.target

    [Service]
    User=USER
    Type=forking
    PIDFile=PATH TO PID FILE
    ExecStart=PATH TO INIT SCRIPT start
    ExecStop=PATH TO INIT SCRIPT stop
    ExecStopPost=/usr/bin/rm -f PATH TO PID FILE

    [Install]
    WantedBy=multi-user.target

    Replace all values written in UPPERCASE LETTERS with appropriate values.

    The Type, PIDFile, and ExecStopPost lines are optional—only use them if the init script starts a daemon.

    multi-user.target also starts the init script when booting into graphical.target. If it should only be started when booting into the display manager, use graphical.target here.

  2. Start the daemon with systemctl start APPLICATION.

14.4 Managing Services with YaST

Basic service management can also be done with the YaST Services Manager module. It supports starting, stopping, enabling and disabling services. It also lets you show a service's status and change the default target. Start the YaST module with YaST › System › Services Manager.

Figure 14.1: Services Manager
Changing the Default System Target

To change the target the system boots into, choose a target from the Default System Target drop-down box. The most often used targets are Graphical Interface (starting a graphical login screen) and Multi-User (starting the system in command line mode).

Starting or Stopping a Service

Select a service from the table. The Active column shows whether it is currently running (Active) or not (Inactive). Toggle its status by choosing Start/Stop.

Starting or stopping a service changes its status for the currently running session. To change its status throughout a reboot, you need to enable or disable it.

Enabling or Disabling a Service

Select a service from the table. The Enabled column shows whether it is currently Enabled or Disabled. Toggle its status by choosing Enable/Disable.

By enabling or disabling a service you configure whether it is started during booting (Enabled) or not (Disabled). This setting will not affect the current session. To change its status in the current session, you need to start or stop it.

Viewing Status Messages

To view the status message of a service, select it from the list and choose Show Details. The output you will see is identical to the one generated by the command systemctl -l status MY_SERVICE.

Warning
Warning: Faulty Runlevel Settings May Damage Your System

Faulty runlevel settings may make your system unusable. Before applying your changes, make absolutely sure that you know their consequences.

14.5 Customization of systemd

The following sections contain some examples for systemd customization.

Warning
Warning: Avoiding Overwritten Customization

Always do systemd customization in /etc/systemd/, never in /usr/lib/systemd/. Otherwise your changes will be overwritten by the next update of systemd.

14.5.1 Customizing Service Files

The systemd service files are located in /usr/lib/systemd/system. If you want to customize them, proceed as follows:

  1. Copy the files you want to modify from /usr/lib/systemd/system to /etc/systemd/system. Keep the file names identical to the original ones.

  2. Modify the copies in /etc/systemd/system according to your needs.

  3. For an overview of your configuration changes, use the systemd-delta command. It can compare and identify configuration files that override other configuration files. For details, refer to the systemd-delta man page.

The modified files in /etc/systemd will take precedence over the original files in /usr/lib/systemd/system, provided that their file name is the same.

14.5.2 Creating Drop-in Files

If you only want to add a few lines to a configuration file or modify a small part of it, you can use so-called drop-in files. Drop-in files let you extend the configuration of unit files without having to edit or override the unit files themselves.

For example, to change one value for the FOOBAR service located in /usr/lib/systemd/system/FOOBAR.service, proceed as follows:

  1. Create a directory called /etc/systemd/system/FOOBAR.service.d/.

    Note the .d suffix. Apart from this suffix, the directory must be named like the service that you want to patch with the drop-in file.

  2. In that directory, create a file WHATEVERMODIFICATION.conf.

    Make sure it only contains the line with the value that you want to modify.

  3. Save your changes to the file. It will be used as an extension of the original file.
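As a concrete sketch of the procedure, the following drop-in would override only the start-up timeout of the hypothetical FOOBAR service (the directive and value are illustrative):

root # mkdir -p /etc/systemd/system/FOOBAR.service.d

Create the file /etc/systemd/system/FOOBAR.service.d/timeout.conf with the following content:

[Service]
TimeoutStartSec=300

Then run systemctl daemon-reload so that systemd picks up the change.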

14.5.3 Creating Custom Targets

On System V init SUSE systems, runlevel 4 is unused to allow administrators to create their own runlevel configuration. systemd allows you to create any number of custom targets. It is suggested to start by adapting an existing target such as graphical.target.

  1. Copy the configuration file /usr/lib/systemd/system/graphical.target to /etc/systemd/system/MY_TARGET.target and adjust it according to your needs.

  2. The configuration file copied in the previous step already covers the required (hard) dependencies for the target. To also cover the wanted (soft) dependencies, create a directory /etc/systemd/system/MY_TARGET.target.wants.

  3. For each wanted service, create a symbolic link from /usr/lib/systemd/system into /etc/systemd/system/MY_TARGET.target.wants.

  4. Once you have finished setting up the target, reload the systemd configuration to make the new target available:

    systemctl daemon-reload
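The complete procedure for a hypothetical target named my.target that adds the SSH daemon to a graphical system could look like this (the target and service names are illustrative):

root # cp /usr/lib/systemd/system/graphical.target /etc/systemd/system/my.target
root # mkdir /etc/systemd/system/my.target.wants
root # ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/my.target.wants/
root # systemctl daemon-reload
root # systemctl isolate my.target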

14.6 Advanced Usage

The following sections cover advanced topics for system administrators. For even more advanced systemd documentation, refer to Lennart Pöttering's series about systemd for administrators at http://0pointer.de/blog/projects.

14.6.1 Cleaning Temporary Directories

systemd supports cleaning temporary directories regularly. The configuration from the previous system version is automatically migrated and active. tmpfiles.d—which is responsible for managing temporary files—reads its configuration from /etc/tmpfiles.d/*.conf , /run/tmpfiles.d/*.conf, and /usr/lib/tmpfiles.d/*.conf files. Configuration placed in /etc/tmpfiles.d/*.conf overrides related configurations from the other two directories (/usr/lib/tmpfiles.d/*.conf is where packages store their configuration files).

The configuration format is one line per path containing action and path, and optionally mode, ownership, age and argument fields, depending on the action. The following example unlinks the X11 lock files:

Type Path               Mode UID  GID  Age Argument
r    /tmp/.X[0-9]*-lock
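As another illustration of the format, the following hypothetical entry would create the directory /run/my-app at boot with the given mode and ownership and clean up files in it that are older than ten days:

Type Path          Mode UID  GID  Age Argument
d    /run/my-app   0755 root root 10d -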

To get the status of the tmpfiles clean-up timer:

systemctl status systemd-tmpfiles-clean.timer
systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
 Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static)
 Active: active (waiting) since Tue 2014-09-09 15:30:36 CEST; 1 weeks 6 days ago
   Docs: man:tmpfiles.d(5)
         man:systemd-tmpfiles(8)

Sep 09 15:30:36 jupiter systemd[1]: Starting Daily Cleanup of Temporary Directories.
Sep 09 15:30:36 jupiter systemd[1]: Started Daily Cleanup of Temporary Directories.

For more information on temporary files handling, see man 5 tmpfiles.d.

14.6.2 System Log

Section 14.6.8, “Debugging Services” explains how to view log messages for a given service. However, displaying log messages is not restricted to service logs. You can also access and query the complete log messages written by systemd—the so-called Journal. Use the command journalctl to display the complete log messages starting with the oldest entries. Refer to man 1 journalctl for options such as applying filters or changing the output format.
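For instance, to display only the messages a single service has logged since the last boot, filters can be combined as follows (sshd is chosen for illustration):

root # journalctl -b -u sshd.service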

14.6.3 Snapshots

You can save the current state of systemd to a named snapshot and later revert to it with the isolate subcommand. This is useful when testing services or custom targets, because it allows you to return to a defined state at any time. A snapshot is only available in the current session and will automatically be deleted on reboot. A snapshot name must end in .snapshot.

Create a Snapshot
systemctl snapshot MY_SNAPSHOT.snapshot
Delete a Snapshot
systemctl delete MY_SNAPSHOT.snapshot
View a Snapshot
systemctl show MY_SNAPSHOT.snapshot
Activate a Snapshot
systemctl isolate MY_SNAPSHOT.snapshot

14.6.4 Loading Kernel Modules

With systemd, kernel modules can automatically be loaded at boot time via a configuration file in /etc/modules-load.d. The file should be named MODULE.conf and have the following content:

# load module MODULE at boot time
MODULE
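For example, to have the (illustratively chosen) vfio-pci module loaded at each boot, such a file can be created with a single command:

root # echo "vfio-pci" > /etc/modules-load.d/vfio-pci.conf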

In case a package installs a configuration file for loading a kernel module, the file gets installed to /usr/lib/modules-load.d. If two configuration files with the same name exist, the one in /etc/modules-load.d takes precedence.

For more information, see the modules-load.d(5) man page.

14.6.5 Performing Actions before Loading a Service

With System V init, actions that needed to be performed before loading a service had to be specified in /etc/init.d/before.local. This procedure is no longer supported with systemd. If you need to perform actions before starting services, do the following:

Loading Kernel Modules

Create a drop-in file in the /etc/modules-load.d directory (see man modules-load.d for the syntax).

Creating Files or Directories, Cleaning-up Directories, Changing Ownership

Create a drop-in file in /etc/tmpfiles.d (see man tmpfiles.d for the syntax)

Other Tasks

Create a system service file, for example /etc/systemd/system/before.service, from the following template:

[Unit]
Before=NAME OF THE SERVICE YOU WANT THIS SERVICE TO BE STARTED BEFORE
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=YOUR_COMMAND
# beware, executable is run directly, not through a shell, check the man pages
# systemd.service and systemd.unit for full syntax
[Install]
# target in which to start the service
WantedBy=multi-user.target
#WantedBy=graphical.target

When the service file is created, you should run the following commands (as root):

systemctl daemon-reload
systemctl enable before

Every time you modify the service file, you need to run:

systemctl daemon-reload
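
For example, a unit that runs a preparation script before the Apache Web server starts could look like the following sketch (/usr/local/sbin/prepare-apache is a hypothetical script; replace it with your own command):

[Unit]
Before=apache2.service
[Service]
Type=oneshot
RemainAfterExit=true
# hypothetical preparation script; executed directly, not through a shell
ExecStart=/usr/local/sbin/prepare-apache
[Install]
WantedBy=multi-user.target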

14.6.6 Kernel Control Groups (cgroups)

On a traditional System V init system it is not always possible to clearly assign a process to the service that spawned it. Some services, such as Apache, spawn a lot of third-party processes (for example CGI or Java processes), which themselves spawn more processes. This makes a clear assignment difficult or even impossible. Additionally, a service may not terminate correctly, leaving some children alive.

systemd solves this problem by placing each service into its own cgroup. cgroups are a kernel feature that allows aggregating processes and all their children into hierarchically organized groups. systemd names each cgroup after its service. Since a non-privileged process is not allowed to leave its cgroup, this provides an effective way to label all processes spawned by a service with the name of the service.

To list all processes belonging to a service, use the command systemd-cgls. The result will look like the following (shortened) example:

Example 14.3: List all Processes Belonging to a Service
root # systemd-cgls --no-pager
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
├─user.slice
│ └─user-1000.slice
│   ├─session-102.scope
│   │ ├─12426 gdm-session-worker [pam/gdm-password]
│   │ ├─15831 gdm-session-worker [pam/gdm-password]
│   │ ├─15839 gdm-session-worker [pam/gdm-password]
│   │ ├─15858 /usr/lib/gnome-terminal-server

[...]

└─system.slice
  ├─systemd-hostnamed.service
  │ └─17616 /usr/lib/systemd/systemd-hostnamed
  ├─cron.service
  │ └─1689 /usr/sbin/cron -n
  ├─ntpd.service
  │ └─1328 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -g -u ntp:ntp -c /etc/ntp.conf
  ├─postfix.service
  │ ├─ 1676 /usr/lib/postfix/master -w
  │ ├─ 1679 qmgr -l -t fifo -u
  │ └─15590 pickup -l -t fifo -u
  ├─sshd.service
  │ └─1436 /usr/sbin/sshd -D

[...]

See Chapter 9, Kernel Control Groups for more information about cgroups.

14.6.7 Terminating Services (Sending Signals)

As explained in Section 14.6.6, “Kernel Control Groups (cgroups)”, it is not always possible to assign a process to its parent service process in a System V init system. This makes it difficult to terminate a service and all of its children. Child processes that have not been terminated will remain as zombie processes.

systemd's concept of confining each service into a cgroup makes it possible to clearly identify all child processes of a service and therefore allows you to send a signal to each of these processes. Use systemctl kill to send signals to services. For a list of available signals refer to man 7 signals.

Sending SIGTERM to a Service

SIGTERM is the default signal that is sent.

systemctl kill MY_SERVICE
Sending SIGNAL to a Service

Use the -s option to specify the signal that should be sent.

systemctl kill -s SIGNAL MY_SERVICE
Selecting Processes

By default the kill command sends the signal to all processes of the specified cgroup. You can restrict it to the control or the main process. The latter is useful, for example, to force a service to reload its configuration by sending SIGHUP:

systemctl kill -s SIGHUP --kill-who=main MY_SERVICE
Warning
Warning: Terminating or Restarting the D-Bus Service is Not Supported

The D-Bus service is the message bus for communication between systemd clients and the systemd manager that is running as pid 1. Even though dbus is a stand-alone daemon, it is an integral part of the init infrastructure.

Terminating dbus or restarting it in the running system is similar to an attempt to terminate or restart pid 1. It will break systemd client/server communication and make most systemd functions unusable.

Therefore, terminating or restarting dbus is neither recommended nor supported.

14.6.8 Debugging Services

By default, systemd is not overly verbose. If a service was started successfully, no output is produced. In case of a failure, a short error message is displayed. However, systemctl status provides the means to debug the start-up and operation of a service.

systemd comes with its own logging mechanism (the Journal) that logs system messages. This allows you to display the service messages together with status messages. The status command works similarly to tail and can also display the log messages in different formats, making it a powerful debugging tool.

Show Service Start-Up Failure

Whenever a service fails to start, use systemctl status MY_SERVICE to get a detailed error message:

root # systemctl start apache2
Job failed. See system journal and 'systemctl status' for details.
root # systemctl status apache2
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: failed (Result: exit-code) since Mon, 04 Jun 2012 16:52:26 +0200; 29s ago
   Process: 3088 ExecStart=/usr/sbin/start_apache2 -D SYSTEMD -k start (code=exited, status=1/FAILURE)
   CGroup: name=systemd:/system/apache2.service

Jun 04 16:52:26 g144 start_apache2[3088]: httpd2-prefork: Syntax error on line
205 of /etc/apache2/httpd.conf: Syntax error on li...alHost>
Show Last N Service Messages

The default behavior of the status subcommand is to display the last ten messages a service issued. To change the number of messages to show, use the --lines=N parameter:

systemctl status ntp
systemctl --lines=20 status ntp
Show Service Messages in Append Mode

To display a live stream of service messages, use the --follow option, which works like tail -f:

systemctl --follow status ntp
Messages Output Format

The --output=MODE parameter allows you to change the output format of service messages. The most important modes available are:

short

The default format. Shows the log messages with a human readable time stamp.

verbose

Full output with all fields.

cat

Terse output without time stamps.

14.7 More Information

For more information on systemd refer to the following online resources:

Homepage

http://www.freedesktop.org/wiki/Software/systemd

systemd for Administrators

Lennart Pöttering, one of the systemd authors, has written a series of blog entries (13 at the time of writing this chapter). Find them at http://0pointer.de/blog/projects.

Part III System

15 32-Bit and 64-Bit Applications in a 64-Bit System Environment

SUSE® Linux Enterprise Desktop is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. SUSE Linux Enterprise Desktop supports the use of 32-bit applications in a 64-bit system environment. This chapter offers …

16 journalctl: Query the systemd Journal

When systemd replaced traditional init scripts in SUSE Linux Enterprise 12 (see Chapter 14, The systemd Daemon), it introduced its own logging system called journal. There is no need to run a syslog based service anymore, as all system events are written in the journal.

17 Basic Networking

Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.

18 Printer Operation

SUSE® Linux Enterprise Desktop supports printing with many types of printers, including remote network printers. Printers can be configured manually or with YaST. For configuration instructions, refer to Section 8.3, “Setting Up a Printer”. Both graphical and command line utilities are available for…

19 The X Window System

The X Window System (X11) is the de facto standard for graphical user interfaces in Unix. X is network-based, enabling applications started on one host to be displayed on another host connected over any kind of network (LAN or Internet). This chapter provides basic information on the X configuration…

20 Accessing File Systems with FUSE

FUSE is the acronym for file system in user space. This means you can configure and mount a file system as an unprivileged user. Normally, you need to be root for this task. FUSE alone is a kernel module. Combined with plug-ins, it allows you to extend FUSE to access almost all file systems like remote SSH connections, ISO images, and more.

21 Managing Kernel Modules

Although Linux is a monolithic kernel, it can be extended using kernel modules. These are special objects that can be inserted into the kernel and removed on demand. In practical terms, kernel modules make it possible to add and remove drivers and interfaces that are not included in the kernel itsel…

22 Dynamic Kernel Device Management with udev

The kernel can add or remove almost any device in a running system. Changes in the device state (whether a device is plugged in or removed) need to be propagated to user space. Devices need to be configured when they are plugged in and recognized. Users of a certain device need to be informed about …

23 Live Patching the Linux Kernel Using kGraft

This document describes the basic principles of the kGraft live patching technology and provides usage guidelines for the SLE Live Patching service.

kGraft is a live patching technology for runtime patching of the Linux kernel, without stopping the kernel. This maximizes system uptime, and thus system availability, which is important for mission-critical systems. By allowing dynamic patching of the kernel, the technology also encourages users to install critical security updates without deferring them to a scheduled downtime.

A kGraft patch is a kernel module, intended for replacing whole functions in the kernel. kGraft primarily offers in-kernel infrastructure for integration of the patched code with base kernel code at runtime.

SLE Live Patching is a service provided on top of regular SUSE Linux Enterprise Server maintenance. kGraft patches distributed through SLE Live Patching supplement regular SLES maintenance updates. Common update stack and procedures can be used for SLE Live Patching deployment.

The information provided in this document relates to the AMD64/Intel 64 and POWER architectures. If you use a different architecture, the procedures may differ.

24 Special System Features

This chapter starts with information about various software packages, the virtual consoles and the keyboard layout. We talk about software components like bash, cron and logrotate, because they were changed or enhanced during the last release cycles. Even if they are small or considered of minor importance, users may want to change their default behavior, because these components are often closely coupled with the system. The chapter concludes with a section about language- and country-specific settings (I18N and L10N).

15 32-Bit and 64-Bit Applications in a 64-Bit System Environment


SUSE® Linux Enterprise Desktop is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. SUSE Linux Enterprise Desktop supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this support is implemented on 64-bit SUSE Linux Enterprise Desktop platforms. It explains how 32-bit applications are executed and how 32-bit applications should be compiled to enable them to run both in 32-bit and 64-bit system environments. Additionally, find information about the kernel API and an explanation of how 32-bit applications can run under a 64-bit kernel.

SUSE Linux Enterprise Desktop for the 64-bit platforms amd64 and Intel 64 is designed so that existing 32-bit applications run in the 64-bit environment out-of-the-box. This support means that you can continue to use your preferred 32-bit applications without waiting for a corresponding 64-bit port to become available.

15.1 Runtime Support

Important
Important: Conflicts Between Application Versions

If an application is available both for 32-bit and 64-bit environments, installing both versions in parallel is bound to lead to problems. In such cases, decide on one of the two versions, and install and use that one.

An exception to this rule is PAM (pluggable authentication modules). SUSE Linux Enterprise Desktop uses PAM in the authentication process as a layer that mediates between user and application. On a 64-bit operating system that also runs 32-bit applications it is necessary to always install both versions of a PAM module.

To be executed correctly, every application requires a range of libraries. Unfortunately, the names for the 32-bit and 64-bit versions of these libraries are identical. They must be differentiated from each other in another way.

To retain compatibility with the 32-bit version, the libraries are stored at the same place in the system as in the 32-bit environment. The 32-bit version of libc.so.6 is located under /lib/libc.so.6 in both the 32-bit and 64-bit environments.

All 64-bit libraries and object files are located in directories called lib64. The 64-bit object files that you would normally expect to find under /lib and /usr/lib are now found under /lib64 and /usr/lib64. This means that there is space for the 32-bit libraries under /lib and /usr/lib, so the file name for both versions can remain unchanged.

Subdirectories of the /lib directories whose data content does not depend on the word size are not moved. This scheme conforms to the LSB (Linux Standards Base) and the FHS (File System Hierarchy Standard).
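
To quickly check for which word size a given binary or library was built, you can use the file command (a quick check, assuming both library versions are installed as described above). file identifies the two C libraries as ELF 32-bit and ELF 64-bit objects, respectively:

file /lib/libc.so.6 /lib64/libc.so.6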

15.2 Software Development

Both 32-bit and 64-bit objects can be generated with a biarch development toolchain. The compilation of 64-bit objects is the default on almost all platforms; 32-bit objects can be generated if special flags are used. This special flag is -m32 for GCC. The flags for the binutils are architecture-dependent, but GCC transfers the correct flags to linkers and assemblers. A biarch development toolchain currently exists for amd64 (supporting development for x86 and amd64 instructions), for z Systems, and for POWER. On the POWER platform, 32-bit objects are normally created, and the -m64 flag must be used to generate 64-bit objects.

All header files must be written in an architecture-independent form. The installed 32-bit and 64-bit libraries must have an API (application programming interface) that matches the installed header files. The normal SUSE Linux Enterprise Desktop environment is designed according to this principle. In the case of manually updated libraries, resolve these issues yourself.

15.3 Software Compilation on Biarch Platforms

To develop binaries for the other architecture on a biarch architecture, the respective libraries for the second architecture must additionally be installed. These packages are called rpmname-32bit. You also need the respective headers and libraries from the rpmname-devel packages and the development libraries for the second architecture from rpmname-devel-32bit.
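
For example, to prepare for building a 32-bit program that links against zlib on an x86_64 system, packages following this naming scheme would be installed as shown below (the exact package names are illustrative; verify them for your product):

zypper install libz1-32bit zlib-devel zlib-devel-32bit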

Most open source programs use an autoconf-based program configuration. To use autoconf for configuring a program for the second architecture, overwrite the normal compiler and linker settings of autoconf by running the configure script with additional environment variables.

The following example refers to an x86_64 system with x86 as the second architecture.

  1. Use the 32-bit compiler:

    CC="gcc -m32"
  2. Instruct the linker to process 32-bit objects (always use gcc as the linker front-end):

    LD="gcc -m32"
  3. Set the assembler to generate 32-bit objects:

    AS="gcc -c -m32"
  4. Specify linker flags, such as the location of 32-bit libraries, for example:

    LDFLAGS="-L/usr/lib"
  5. Specify the location for the 32-bit object code libraries:

    --libdir=/usr/lib
  6. Specify the location for the 32-bit X libraries:

    --x-libraries=/usr/lib

Not all of these variables are needed for every program. Adapt them to the respective program.

An example configure call to compile a native 32-bit application on x86_64 could appear as follows:

CC="gcc -m32"
LDFLAGS="-L/usr/lib;"
./configure --prefix=/usr --libdir=/usr/lib --x-libraries=/usr/lib
make
make install

15.4 Kernel Specifications

The 64-bit kernels for AMD64/Intel 64 offer both a 64-bit and a 32-bit kernel ABI (application binary interface). The latter is identical with the ABI for the corresponding 32-bit kernel. This means that the 32-bit application can communicate with the 64-bit kernel in the same way as with the 32-bit kernel.

The 32-bit emulation of system calls for a 64-bit kernel does not support all the APIs used by system programs. This depends on the platform. For this reason, a few applications, like lspci, must be compiled as 64-bit programs to function properly.

A 64-bit kernel can only load 64-bit kernel modules that have been specially compiled for this kernel. It is not possible to use 32-bit kernel modules.

Tip
Tip: Kernel-loadable Modules

Some applications require separate kernel-loadable modules. If you intend to use such a 32-bit application in a 64-bit system environment, contact the provider of this application and SUSE to make sure that the 64-bit version of the kernel-loadable module and the 32-bit compiled version of the kernel API are available for this module.

16 journalctl: Query the systemd Journal


When systemd replaced traditional init scripts in SUSE Linux Enterprise 12 (see Chapter 14, The systemd Daemon), it introduced its own logging system called journal. There is no need to run a syslog based service anymore, as all system events are written in the journal.

The journal itself is a system service managed by systemd. Its full name is systemd-journald.service. It collects and stores logging data by maintaining structured indexed journals based on logging information received from the kernel, user processes, standard input, and system service errors. The systemd-journald service is on by default:

# systemctl status systemd-journald
systemd-journald.service - Journal Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static)
   Active: active (running) since Mon 2014-05-26 08:36:59 EDT; 3 days ago
     Docs: man:systemd-journald.service(8)
           man:journald.conf(5)
 Main PID: 413 (systemd-journal)
   Status: "Processing requests..."
   CGroup: /system.slice/systemd-journald.service
           └─413 /usr/lib/systemd/systemd-journald
[...]

16.1 Making the Journal Persistent

The journal stores log data in /run/log/journal/ by default. Because the /run/ directory is volatile by nature, log data is lost at reboot. To make the log data persistent, the directory /var/log/journal/ with correct ownership and permissions must exist, where the systemd-journald service can store its data. systemd will create the directory for you—and switch to persistent logging—if you do the following:

  1. As root, open /etc/systemd/journald.conf for editing.

    # vi /etc/systemd/journald.conf
  2. Uncomment the line containing Storage= and change it to

    [...]
    [Journal]
    Storage=persistent
    #Compress=yes
    [...]
  3. Save the file and restart systemd-journald:

    systemctl restart systemd-journald
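
To verify that persistent logging is active, check that journald has created a subdirectory named after the machine ID (as stored in /etc/machine-id) below the new log directory:

# ls /var/log/journal/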

16.2 journalctl Useful Switches

This section introduces several common useful options to enhance the default journalctl behavior. All switches are described in the journalctl manual page, man 1 journalctl.

Tip
Tip: Messages Related to a Specific Executable

To show all journal messages related to a specific executable, specify the full path to the executable:

journalctl /usr/lib/systemd/systemd
-f

Shows only the most recent journal messages, and prints new log entries as they are added to the journal.

-e

Prints the messages and jumps to the end of the journal, so that the latest entries are visible within the pager.

-r

Prints the messages of the journal in reverse order, so that the latest entries are listed first.

-k

Shows only kernel messages. This is equivalent to the field match _TRANSPORT=kernel (see Section 16.3.3, “Filtering Based on Fields”).

-u

Shows only messages for the specified systemd unit. This is equivalent to the field match _SYSTEMD_UNIT=UNIT (see Section 16.3.3, “Filtering Based on Fields”).

# journalctl -u apache2
[...]
Jun 03 10:07:11 pinkiepie systemd[1]: Starting The Apache Webserver...
Jun 03 10:07:12 pinkiepie systemd[1]: Started The Apache Webserver.

16.3 Filtering the Journal Output

When called without switches, journalctl shows the full content of the journal, the oldest entries listed first. The output can be filtered by specific switches and fields.

16.3.1 Filtering Based on a Boot Number

journalctl can filter messages based on a specific system boot. To list all available boots, run

# journalctl --list-boots
-1 097ed2cd99124a2391d2cffab1b566f0 Mon 2014-05-26 08:36:56 EDT—Fri 2014-05-30 05:33:44 EDT
 0 156019a44a774a0bb0148a92df4af81b Fri 2014-05-30 05:34:09 EDT—Fri 2014-05-30 06:15:01 EDT

The first column lists the boot offset: 0 for the current boot, -1 for the previous one, -2 for the one prior to that, etc. The second column contains the boot ID followed by the limiting time stamps of the specific boot.

Show all messages from the current boot:

# journalctl -b

If you need to see journal messages from the previous boot, add an offset parameter. The following example outputs the previous boot messages:

# journalctl -b -1

Another way is to list boot messages based on the boot ID. For this purpose, use the _BOOT_ID field:

# journalctl _BOOT_ID=156019a44a774a0bb0148a92df4af81b

16.3.2 Filtering Based on Time Interval

You can filter the output of journalctl by specifying the starting and/or ending date. The date specification should be of the format "2014-06-30 9:17:16". If the time part is omitted, midnight is assumed. If seconds are omitted, ":00" is assumed. If the date part is omitted, the current day is assumed. Instead of a numeric expression, you can specify the keywords "yesterday", "today", or "tomorrow". They refer to midnight of the day before the current day, of the current day, or of the day after the current day. If you specify "now", it refers to the current time. You can also specify relative times prefixed with - or +, referring to times before or after the current time.

Show only new messages since now, and update the output continuously:

# journalctl --since "now" -f

Show all messages since last midnight till 3:20am:

# journalctl --since "today" --until "3:20"
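
Show all messages logged within the last two hours, using a relative time specification as described above:

# journalctl --since "-2h"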

16.3.3 Filtering Based on Fields

You can filter the output of the journal by specific fields. The syntax of a field to be matched is FIELD_NAME=MATCHED_VALUE, such as _SYSTEMD_UNIT=httpd.service. You can specify multiple matches in a single query to filter the output messages even more. See man 7 systemd.journal-fields for a list of default fields.

Show messages produced by a specific process ID:

# journalctl _PID=1039

Show messages belonging to a specific user ID:

# journalctl _UID=1000

Show messages from the kernel ring buffer (the same as dmesg produces):

# journalctl _TRANSPORT=kernel

Show messages from the service's standard or error output:

# journalctl _TRANSPORT=stdout

Show messages produced by a specified service only:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service

If two different fields are specified, only entries that match both expressions at the same time are shown:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1488

If two matches refer to the same field, all entries matching either expression are shown:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=dbus.service

You can use the '+' separator to combine two expressions in a logical 'OR'. The following example shows all messages from the Avahi service process with the process ID 1480 together with all messages from the D-Bus service:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1480 + _SYSTEMD_UNIT=dbus.service

16.4 Investigating systemd Errors

This section introduces a simple example to illustrate how to find and fix the error reported by systemd during apache2 start-up.

  1. Try to start the apache2 service:

    # systemctl start apache2
    Job for apache2.service failed. See 'systemctl status apache2' and 'journalctl -xn' for details.
  2. Let us see what the service's status says:

    # systemctl status apache2
    apache2.service - The Apache Webserver
       Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
       Active: failed (Result: exit-code) since Tue 2014-06-03 11:08:13 CEST; 7min ago
      Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND \
               -k graceful-stop (code=exited, status=1/FAILURE)

    The ID of the process causing the failure is 11026.

  3. Show the verbose version of messages related to process ID 11026:

    # journalctl -o verbose _PID=11026
    [...]
    MESSAGE=AH00526: Syntax error on line 6 of /etc/apache2/default-server.conf:
    [...]
    MESSAGE=Invalid command 'DocumenttRoot', perhaps misspelled or defined by a module
    [...]
  4. Fix the typo inside /etc/apache2/default-server.conf, start the apache2 service, and print its status:

    # systemctl start apache2 && systemctl status apache2
    apache2.service - The Apache Webserver
       Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
       Active: active (running) since Tue 2014-06-03 11:26:24 CEST; 4ms ago
      Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND
               -k graceful-stop (code=exited, status=1/FAILURE)
     Main PID: 11263 (httpd2-prefork)
       Status: "Processing requests..."
       CGroup: /system.slice/apache2.service
               ├─11263 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11280 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11281 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11282 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11283 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               └─11285 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]

16.5 Journald Configuration

The behavior of the systemd-journald service can be adjusted by modifying /etc/systemd/journald.conf. This section introduces only basic option settings. For a complete file description, see man 5 journald.conf. Note that you need to restart the journal for the changes to take effect:

# systemctl restart systemd-journald

16.5.1 Changing the Journal Size Limit

If the journal log data is saved to a persistent location (see Section 16.1, “Making the Journal Persistent”), it uses up to 10% of the file system that /var/log/journal resides on. For example, if /var/log/journal is located on a 30 GB /var partition, the journal may use up to 3 GB of disk space. To change this limit, change (and uncomment) the SystemMaxUse option:

SystemMaxUse=50M
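
To check how much disk space the journal currently occupies, use the --disk-usage switch:

# journalctl --disk-usage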

16.5.2 Forwarding the Journal to /dev/ttyX

You can forward the journal to a terminal device to be informed about system messages on a preferred terminal screen, for example /dev/tty12. Set the following journald options:

ForwardToConsole=yes
TTYPath=/dev/tty12

16.5.3 Forwarding the Journal to Syslog Facility

Journald is backward compatible with traditional syslog implementations such as rsyslog. Make sure the following requirements are met:

  • rsyslog is installed.

    # rpm -q rsyslog
    rsyslog-7.4.8-2.16.x86_64
  • rsyslog service is enabled.

    # systemctl is-enabled rsyslog
    enabled
  • Forwarding to syslog is enabled in /etc/systemd/journald.conf.

    ForwardToSyslog=yes

16.6 Using YaST to Filter the systemd Journal

For an easy way of filtering the systemd journal (without having to deal with the journalctl syntax), you can use the YaST journal module. After installing it with sudo zypper in yast2-journal, start it from YaST by selecting System › Systemd Journal. Alternatively, start it from the command line by entering sudo yast2 journal.

YaST systemd Journal
Figure 16.1: YaST systemd Journal

The module displays the log entries in a table. The search box on top allows you to search for entries that contain certain characters, similar to using grep. To filter the entries by date and time, unit, file, or priority, click Change filters and set the respective options.

17 Basic Networking


Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.

Linux and other Unix operating systems use the TCP/IP protocol. It is not a single network protocol, but a family of network protocols that offer various services. The protocols listed in Several Protocols in the TCP/IP Protocol Family are provided for exchanging data between two machines via TCP/IP. Networks combined by TCP/IP, comprising a worldwide network, are also called the Internet.

RFC stands for Request for Comments. RFCs are documents that describe various Internet protocols and implementation procedures for the operating system and its applications. The RFC documents describe the setup of Internet protocols. For more information about RFCs, see http://www.ietf.org/rfc.html.

Several Protocols in the TCP/IP Protocol Family
TCP

Transmission Control Protocol: a connection-oriented, reliable protocol. The data to transmit is first sent by the application as a stream of data and converted into the appropriate format by the operating system. The data arrives at the respective application on the destination host in the original data stream format in which it was initially sent. TCP determines whether any data has been lost or jumbled during the transmission. TCP is implemented wherever the data sequence matters.

UDP

User Datagram Protocol: a connectionless, unreliable protocol. The data to transmit is sent in the form of packets generated by the application. The order in which the data arrives at the recipient is not guaranteed and data loss is possible. UDP is suitable for record-oriented applications. It features a smaller latency period than TCP.

ICMP

Internet Control Message Protocol: This is not a protocol for the end user, but a special control protocol that issues error reports and can control the behavior of machines participating in TCP/IP data transfer. In addition, it provides a special echo mode that can be viewed using the program ping.

IGMP

Internet Group Management Protocol: This protocol controls machine behavior when implementing IP multicast.

As shown in Figure 17.1, “Simplified Layer Model for TCP/IP”, data exchange takes place in different layers. The actual network layer is the insecure data transfer via IP (Internet protocol). On top of IP, TCP (transmission control protocol) guarantees, to a certain extent, security of the data transfer. The IP layer is supported by the underlying hardware-dependent protocol, such as Ethernet.

Simplified Layer Model for TCP/IP
Figure 17.1: Simplified Layer Model for TCP/IP

The diagram provides one or two examples for each layer. The layers are ordered according to abstraction levels. The lowest layer is very close to the hardware. The uppermost layer, however, is almost a complete abstraction from the hardware. Every layer has its own special function. The special functions of each layer are mostly implicit in their description. The data link and physical layers represent the physical network used, such as Ethernet.

Almost all hardware protocols work on a packet-oriented basis. The data to transmit is collected into packets (it cannot be sent all at once). The maximum size of a TCP/IP packet is approximately 64 KB. Packets are normally much smaller, as the network hardware can be a limiting factor. The maximum size of a data packet on an Ethernet is about 1500 bytes. The size of a TCP/IP packet is limited to this amount when the data is sent over an Ethernet. If more data is transferred, more data packets need to be sent by the operating system.

For the layers to serve their designated functions, additional information regarding each layer must be saved in the data packet. This takes place in the header of the packet. Every layer attaches a small block of data, called the protocol header, to the front of each emerging packet. A sample TCP/IP data packet traveling over an Ethernet cable is illustrated in Figure 17.2, “TCP/IP Ethernet Packet”. The checksum is located at the end of the packet, not at the beginning. This simplifies things for the network hardware.

TCP/IP Ethernet Packet
Figure 17.2: TCP/IP Ethernet Packet

When an application sends data over the network, the data passes through each layer, all implemented in the Linux kernel except the physical layer. Each layer is responsible for preparing the data so it can be passed to the next layer. The lowest layer is ultimately responsible for sending the data. The entire procedure is reversed when data is received. Like the layers of an onion, in each layer the protocol headers are removed from the transported data. Finally, the transport layer is responsible for making the data available for use by the applications at the destination. In this manner, one layer only communicates with the layer directly above or below it. For applications, it is irrelevant whether data is transmitted via a 100 Mbit/s FDDI network or via a 56-Kbit/s modem line. Likewise, it is irrelevant for the data line which kind of data is transmitted, as long as packets are in the correct format.

17.1 IP Addresses and Routing

The discussion in this section is limited to IPv4 networks. For information about IPv6 protocol, the successor to IPv4, refer to Section 17.2, “IPv6—The Next Generation Internet”.

17.1.1 IP Addresses

Every computer on the Internet has a unique 32-bit address. These 32 bits (or 4 bytes) are normally written as illustrated in the second row in Example 17.1, “Writing IP Addresses”.

Example 17.1: Writing IP Addresses
IP Address (binary):  11000000 10101000 00000000 00010100
IP Address (decimal):      192.     168.       0.      20

In decimal form, the four bytes are written in the decimal number system, separated by periods. The IP address is assigned to a host or a network interface. It can be used only once throughout the world. There are exceptions to this rule, but these are not relevant to the following passages.

The dots in IP addresses indicate the hierarchical system. Until the 1990s, IP addresses were strictly categorized in classes. However, this system proved too inflexible and was discontinued. Now, classless routing (CIDR, classless inter-domain routing) is used.

17.1.2 Netmasks and Routing

Netmasks are used to define the address range of a subnet. If two hosts are in the same subnet, they can reach each other directly. If they are not in the same subnet, they need the address of a gateway that handles all the traffic for the subnet. To check if two IP addresses are in the same subnet, simply AND both addresses with the netmask. If the result is identical, both IP addresses are in the same local network. If there are differences, the remote IP address, and thus the remote interface, can only be reached over a gateway.

To understand how the netmask works, look at Example 17.2, “Linking IP Addresses to the Netmask”. The netmask consists of 32 bits that identify how much of an IP address belongs to the network. All bits that are 1 mark the corresponding bit in the IP address as belonging to the network. All bits that are 0 mark bits inside the subnet. This means that the more bits are 1, the smaller the subnet is. Because the netmask always consists of a block of consecutive 1 bits starting from the left, it is also possible to simply count the number of these bits. In Example 17.2, “Linking IP Addresses to the Netmask” the first net with 24 bits could also be written as 192.168.0.0/24.

Example 17.2: Linking IP Addresses to the Netmask
IP address (192.168.0.20):  11000000 10101000 00000000 00010100
Netmask   (255.255.255.0):  11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11000000 10101000 00000000 00000000
In the decimal system:           192.     168.       0.       0

IP address (213.95.15.200): 11010101 10111111 00001111 11001000
Netmask    (255.255.255.0): 11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11010101 10111111 00001111 00000000
In the decimal system:           213.      95.      15.       0
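
The AND operation described above can also be tried out with a short Bash sketch (the addresses and the netmask below are example values):

#!/bin/bash
# Minimal sketch: check whether two IPv4 addresses are in the same
# subnet by ANDing each address with the netmask.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
IP1=$(ip_to_int 192.168.0.20)    # example address 1
IP2=$(ip_to_int 192.168.0.100)   # example address 2
MASK=$(ip_to_int 255.255.255.0)  # example netmask
if [ $(( IP1 & MASK )) -eq $(( IP2 & MASK )) ]; then
  echo "Same subnet: the hosts can reach each other directly."
else
  echo "Different subnets: traffic must pass through a gateway."
fi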

To give another example: all machines connected with the same Ethernet cable are usually located in the same subnet and are directly accessible. Even when the subnet is physically divided by switches or bridges, these hosts can still be reached directly.

IP addresses outside the local subnet can only be reached if a gateway is configured for the target network. In the most common case, there is only one gateway that handles all external traffic. However, it is also possible to configure several gateways for different subnets.

If a gateway has been configured, all external IP packets are sent to the appropriate gateway. This gateway then attempts to forward the packets in the same manner—from host to host—until it reaches the destination host or the packet's TTL (time to live) expires.

Specific Addresses
Base Network Address

This is the netmask AND any address in the network, as shown in Example 17.2, “Linking IP Addresses to the Netmask” under Result. This address cannot be assigned to any hosts.

Broadcast Address

This could be paraphrased as: Access all hosts in this subnet. To generate this, the netmask is inverted in binary form and linked to the base network address with a logical OR. The above example therefore results in 192.168.0.255. This address cannot be assigned to any hosts.

Local Host

The address 127.0.0.1 is assigned to the loopback device on each host. A connection can be set up to your own machine with this address and with all addresses from the complete 127.0.0.0/8 loopback network as defined with IPv4. With IPv6 there is only one loopback address (::1).

Because IP addresses must be unique all over the world, you cannot select random addresses. There are three address domains to use if you want to set up a private IP-based network. Hosts using these addresses cannot be reached from the rest of the Internet, because these addresses are not routed over the Internet. These address domains are specified in RFC 1918 (which obsoletes RFC 1597) and listed in Table 17.1, “Private IP Address Domains”.

Table 17.1: Private IP Address Domains

Network/Netmask          Domain
10.0.0.0/255.0.0.0       10.x.x.x
172.16.0.0/255.240.0.0   172.16.x.x to 172.31.x.x
192.168.0.0/255.255.0.0  192.168.x.x

17.2 IPv6—The Next Generation Internet

Due to the emergence of the World Wide Web (WWW), the Internet has experienced explosive growth, with an increasing number of computers communicating via TCP/IP in the past fifteen years. Since Tim Berners-Lee at CERN (http://public.web.cern.ch) invented the WWW in 1990, the number of Internet hosts has grown from a few thousand to about a hundred million.

As mentioned, an IPv4 address consists of only 32 bits. Also, quite a few IP addresses are lost—they cannot be used because of the way in which networks are organized. The number of addresses available in your subnet is two to the power of the number of host bits, minus two. A subnet has, for example, 2, 6, or 14 addresses available. To connect 128 hosts to the Internet, for example, you need a subnet with 256 IP addresses, of which only 254 are usable, because two IP addresses are needed for the structure of the subnet itself: the broadcast and the base network address.

Under the current IPv4 protocol, DHCP or NAT (network address translation) are the typical mechanisms used to circumvent the potential address shortage. Combined with the convention to keep private and public address spaces separate, these methods can certainly mitigate the shortage. The problem with them lies in their configuration, which is a chore to set up and a burden to maintain. To set up a host in an IPv4 network, you need several address items, such as the host's own IP address, the netmask, the gateway address and maybe a name server address. All these items need to be known and cannot be derived from somewhere else.

With IPv6, both the address shortage and the complicated configuration should be a thing of the past. The following sections tell more about the improvements and benefits brought by IPv6 and about the transition from the old protocol to the new one.

17.2.1 Advantages

The most important and most visible improvement brought by the new protocol is the enormous expansion of the available address space. An IPv6 address is made up of 128 bits instead of the traditional 32 bits. This provides for as many as several quadrillion IP addresses.

However, IPv6 addresses are not only different from their predecessors with regard to their length. They also have a different internal structure that may contain more specific information about the systems and the networks to which they belong. More details about this are found in Section 17.2.2, “Address Types and Structure”.

The following is a list of other advantages of the new protocol:

Autoconfiguration

IPv6 makes the network plug and play capable, which means that a newly set up system integrates into the (local) network without any manual configuration. The new host uses its automatic configuration mechanism to derive its own address from the information made available by the neighboring routers, relying on a protocol called the neighbor discovery (ND) protocol. This method does not require any intervention on the administrator's part and there is no need to maintain a central server for address allocation—an additional advantage over IPv4, where automatic address allocation requires a DHCP server.

Nevertheless, if a router is connected to a switch, the router should send periodic advertisements with flags telling the hosts of a network how they should interact with each other. For more information, see RFC 2462, RFC 3315, and the radvd.conf(5) man page.

Mobility

IPv6 makes it possible to assign several addresses to one network interface at the same time. This allows users to access several networks easily, something that could be compared with the international roaming services offered by mobile phone companies. When you take your mobile phone abroad, the phone automatically logs in to a foreign service when it enters the corresponding area, so you can be reached under the same number everywhere and can place an outgoing call, as you would in your home area.

Secure Communication

With IPv4, network security is an add-on function. IPv6 includes IPsec as one of its core features, allowing systems to communicate over a secure tunnel to avoid eavesdropping by outsiders on the Internet.

Backward Compatibility

Realistically, it would be impossible to switch the entire Internet from IPv4 to IPv6 at one time. Therefore, it is crucial that both protocols can coexist not only on the Internet, but also on one system. This is ensured by compatible addresses (IPv4 addresses can easily be translated into IPv6 addresses) and by using several tunnels. See Section 17.2.3, “Coexistence of IPv4 and IPv6”. Also, systems can rely on a dual stack IP technique to support both protocols at the same time, meaning that they have two network stacks that are completely separate, such that there is no interference between the two protocol versions.

Custom Tailored Services through Multicasting

With IPv4, some services, such as SMB, need to broadcast their packets to all hosts in the local network. IPv6 allows a much more fine-grained approach by enabling servers to address hosts through multicasting, that is, by addressing several hosts as parts of a group. This is different from addressing all hosts through broadcasting, or each host individually through unicasting. Which hosts are addressed as a group may depend on the concrete application. There are some predefined groups to address all name servers (the all name servers multicast group), for example, or all routers (the all routers multicast group).

17.2.2 Address Types and Structure

As mentioned, the current IP protocol has two major limitations: there is an increasing shortage of IP addresses, and configuring the network and maintaining the routing tables is becoming a more complex and burdensome task. IPv6 solves the first problem by expanding the address space to 128 bits. The second one is mitigated by introducing a hierarchical address structure combined with sophisticated techniques to allocate network addresses, and multihoming (the ability to assign several addresses to one device, giving access to several networks).

When dealing with IPv6, it is useful to know about three different types of addresses:

Unicast

Addresses of this type are associated with exactly one network interface. Packets with such an address are delivered to only one destination. Accordingly, unicast addresses are used to transfer packets to individual hosts on the local network or the Internet.

Multicast

Addresses of this type relate to a group of network interfaces. Packets with such an address are delivered to all destinations that belong to the group. Multicast addresses are mainly used by certain network services to communicate with certain groups of hosts in a well-directed manner.

Anycast

Addresses of this type are related to a group of interfaces. Packets with such an address are delivered to the member of the group that is closest to the sender, according to the principles of the underlying routing protocol. Anycast addresses are used to make it easier for hosts to find out about servers offering certain services in the given network area. All servers of the same type have the same anycast address. Whenever a host requests a service, it receives a reply from the server with the closest location, as determined by the routing protocol. If this server should fail for some reason, the protocol automatically selects the second closest server, then the third one, and so forth.

An IPv6 address is made up of eight four-digit fields, each representing 16 bits, written in hexadecimal notation. They are separated by colons (:). Any leading zeros within a given field may be dropped, but zeros within the field or at its end may not. Another convention is that one or more consecutive all-zero fields may be collapsed into a double colon. However, only one such :: is allowed per address. This kind of shorthand notation is shown in Example 17.3, “Sample IPv6 Address”, where all three lines represent the same address.

Example 17.3: Sample IPv6 Address
fe80 : 0000 : 0000 : 0000 : 0000 : 10 : 1000 : 1a4
fe80 :    0 :    0 :    0 :    0 : 10 : 1000 : 1a4
fe80 :                           : 10 : 1000 : 1a4

Each part of an IPv6 address has a defined function. The first bytes form the prefix and specify the type of address. The center part is the network portion of the address, but it may be unused. The end of the address forms the host part. With IPv6, the netmask is defined by indicating the length of the prefix after a slash at the end of the address. An address, as shown in Example 17.4, “IPv6 Address Specifying the Prefix Length”, contains the information that the first 64 bits form the network part of the address and the last 64 form its host part. In other words, the 64 means that the netmask is filled with 64 1-bit values from the left. As with IPv4, the IP address is combined with the netmask using AND to determine whether the host is located in the same subnet or in another one.

Example 17.4: IPv6 Address Specifying the Prefix Length
fe80::10:1000:1a4/64

IPv6 knows about several predefined types of prefixes. Some are shown in Various IPv6 Prefixes.

Various IPv6 Prefixes
00

IPv4 addresses and IPv4 over IPv6 compatibility addresses. These are used to maintain compatibility with IPv4. Their use still requires a router able to translate IPv6 packets into IPv4 packets. Several special addresses, such as the one for the loopback device, have this prefix as well.

2 or 3 as the first digit

Aggregatable global unicast addresses. As is the case with IPv4, an interface can be assigned to form part of a certain subnet. Currently, there are the following address spaces: 2001::/16 (production quality address space) and 2002::/16 (6to4 address space).

fe80::/10

Link-local addresses. Addresses with this prefix should not be routed and should therefore only be reachable from within the same subnet.

fec0::/10

Site-local addresses. These may be routed, but only within the network of the organization to which they belong. In effect, they are the IPv6 equivalent of the current private network address space, such as 10.x.x.x.

ff

These are multicast addresses.

A unicast address consists of three basic components:

Public Topology

The first part (which also contains one of the prefixes mentioned above) is used to route packets through the public Internet. It includes information about the company or institution that provides the Internet access.

Site Topology

The second part contains routing information about the subnet to which to deliver the packet.

Interface ID

The third part identifies the interface to which to deliver the packet. This also allows for the MAC to form part of the address. Given that the MAC is a globally unique, fixed identifier coded into the device by the hardware maker, the configuration procedure is substantially simplified. In fact, the last 64 address bits are consolidated to form the EUI-64 token, with 48 bits taken from the MAC and the remaining 16 bits containing special information about the token type. This also makes it possible to assign an EUI-64 token to interfaces that do not have a MAC, such as those based on PPP.

On top of this basic structure, IPv6 distinguishes between five different types of unicast addresses:

:: (unspecified)

This address is used by the host as its source address when the interface is initialized for the first time (at which point, the address cannot yet be determined by other means).

::1 (loopback)

The address of the loopback device.

IPv4 Compatible Addresses

The IPv6 address is formed by the IPv4 address and a prefix consisting of 96 zero bits. This type of compatibility address is used for tunneling (see Section 17.2.3, “Coexistence of IPv4 and IPv6”) to allow IPv4 and IPv6 hosts to communicate with others operating in a pure IPv4 environment.

IPv4 Addresses Mapped to IPv6

This type of address specifies a pure IPv4 address in IPv6 notation.

Local Addresses

There are two address types for local use:

link-local

This type of address can only be used in the local subnet. Packets with a source or target address of this type should not be routed to the Internet or other subnets. These addresses contain a special prefix (fe80::/10) and the interface ID of the network card, with the middle part consisting of zero bytes. Addresses of this type are used during automatic configuration to communicate with other hosts belonging to the same subnet.

site-local

Packets with this type of address may be routed to other subnets, but not to the wider Internet—they must remain inside the organization's own network. Such addresses are used for intranets and are an equivalent of the private address space defined by IPv4. They contain a special prefix (fec0::/10), the interface ID, and a 16 bit field specifying the subnet ID. Again, the rest is filled with zero bytes.

As a completely new feature introduced with IPv6, each network interface normally gets several IP addresses, with the advantage that several networks can be accessed through the same interface. One of these networks can be configured completely automatically using the MAC and a known prefix with the result that all hosts on the local network can be reached when IPv6 is enabled (using the link-local address). With the MAC forming part of it, any IP address used in the world is unique. The only variable parts of the address are those specifying the site topology and the public topology, depending on the actual network in which the host is currently operating.

For a host to go back and forth between different networks, it needs at least two addresses. One of them, the home address, not only contains the interface ID but also an identifier of the home network to which it normally belongs (and the corresponding prefix). The home address is a static address and, as such, it does not normally change. Still, all packets destined to the mobile host can be delivered to it, regardless of whether it operates in the home network or somewhere outside. This is made possible by the completely new features introduced with IPv6, such as stateless autoconfiguration and neighbor discovery. In addition to its home address, a mobile host gets one or more additional addresses that belong to the foreign networks where it is roaming. These are called care-of addresses. The home network has a facility that forwards any packets destined to the host when it is roaming outside. In an IPv6 environment, this task is performed by the home agent, which takes all packets destined to the home address and relays them through a tunnel. On the other hand, those packets destined to the care-of address are directly transferred to the mobile host without any special detours.

17.2.3 Coexistence of IPv4 and IPv6

The migration of all hosts connected to the Internet from IPv4 to IPv6 is a gradual process. Both protocols will coexist for some time to come. The coexistence on one system is guaranteed where there is a dual stack implementation of both protocols. That still leaves the question of how an IPv6 enabled host should communicate with an IPv4 host and how IPv6 packets should be transported by the current networks, which are predominantly IPv4-based. The best solutions offer tunneling and compatibility addresses (see Section 17.2.2, “Address Types and Structure”).

IPv6 hosts that are more or less isolated in the (worldwide) IPv4 network can communicate through tunnels: IPv6 packets are encapsulated as IPv4 packets to move them across an IPv4 network. Such a connection between two IPv4 hosts is called a tunnel. To achieve this, packets must include the IPv6 destination address (or the corresponding prefix) and the IPv4 address of the remote host at the receiving end of the tunnel. A basic tunnel can be configured manually according to an agreement between the hosts' administrators. This is also called static tunneling.

However, the configuration and maintenance of static tunnels is often too labor-intensive to use them for daily communication needs. Therefore, IPv6 provides for three different methods of dynamic tunneling:

6over4

IPv6 packets are automatically encapsulated as IPv4 packets and sent over an IPv4 network capable of multicasting. IPv6 is tricked into seeing the whole network (Internet) as a huge local area network (LAN). This makes it possible to determine the receiving end of the IPv4 tunnel automatically. However, this method does not scale very well and is also hampered because IP multicasting is far from widespread on the Internet. Therefore, it only provides a solution for smaller corporate or institutional networks where multicasting can be enabled. The specifications for this method are laid down in RFC 2529.

6to4

With this method, IPv4 addresses are automatically generated from IPv6 addresses, enabling isolated IPv6 hosts to communicate over an IPv4 network. However, several problems have been reported regarding the communication between those isolated IPv6 hosts and the Internet. The method is described in RFC 3056.

IPv6 Tunnel Broker

This method relies on special servers that provide dedicated tunnels for IPv6 hosts. It is described in RFC 3053.

17.2.4 Configuring IPv6

To configure IPv6, you normally do not need to make any changes on the individual workstations. IPv6 is enabled by default. To disable or enable IPv6 on an installed system, use the YaST Network Settings module. On the Global Options tab, check or uncheck the Enable IPv6 option as necessary. To enable it temporarily until the next reboot, enter modprobe -i ipv6 as root. It is impossible to unload the IPv6 module after it has been loaded.

Because of the autoconfiguration concept of IPv6, the network card is assigned an address in the link-local network. Normally, no routing table management takes place on a workstation. Using the router advertisement protocol, the workstation can query the network routers for the prefix and gateways to use. The radvd program can be used to set up an IPv6 router. This program informs the workstations of the prefix to use for IPv6 addresses, and of the routers. Alternatively, use zebra/quagga for automatic configuration of both addresses and routing.
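
As a minimal sketch of a router advertisement setup, a radvd.conf could announce one prefix on one interface as follows; the interface name eth0 and the documentation prefix 2001:db8:0:1::/64 are placeholder assumptions:

interface eth0
{
    # send router advertisements periodically and on solicitation
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64
    {
        # hosts may autoconfigure addresses from this prefix
        AdvOnLink on;
        AdvAutonomous on;
    };
};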

For information about how to set up various types of tunnels using the /etc/sysconfig/network files, see the man page of ifcfg-tunnel (man ifcfg-tunnel).

17.2.5 For More Information

The above overview does not cover the topic of IPv6 comprehensively. For a more in-depth look at the new protocol, refer to the following online documentation and books:

http://www.ipv6.org/

The starting point for everything about IPv6.

http://www.ipv6day.org

All information needed to start your own IPv6 network.

http://www.ipv6-to-standard.org/

The list of IPv6-enabled products.

http://www.bieringer.de/linux/IPv6/

Here, find the Linux IPv6-HOWTO and many links related to the topic.

RFC 2460

The fundamental RFC about IPv6.

IPv6 Essentials

A book describing all the important aspects of the topic is IPv6 Essentials by Silvia Hagen (ISBN 0-596-00125-8).

17.3 Name Resolution

DNS assists in assigning an IP address to one or more names and assigning a name to an IP address. In Linux, this conversion is usually carried out by a special type of software known as bind. The machine that takes care of this conversion is called a name server. The names make up a hierarchical system in which each name component is separated by a period. The name hierarchy is, however, independent of the IP address hierarchy described above.

Consider a complete name, such as jupiter.example.com, written in the format hostname.domain. A full name, called a fully qualified domain name (FQDN), consists of a host name and a domain name (example.com). The latter also includes the top level domain or TLD (com).

TLD assignment has become quite confusing for historical reasons. Traditionally, three-letter domain names are used in the USA. In the rest of the world, the two-letter ISO national codes are the standard. In addition to that, longer TLDs were introduced in 2000 that represent certain spheres of activity (for example, .info, .name, .museum).

In the early days of the Internet (before 1990), the file /etc/hosts was used to store the names of all the machines represented over the Internet. This quickly proved to be impractical in the face of the rapidly growing number of computers connected to the Internet. For this reason, a decentralized database was developed to store the host names in a widely distributed manner. This database, similar to a name server, does not have the data pertaining to all hosts on the Internet readily available, but can dispatch requests to other name servers.

The top of the hierarchy is occupied by root name servers. These root name servers manage the top level domains and are run by the Network Information Center (NIC). Each root name server knows about the name servers responsible for a given top level domain. Information about top level domain NICs is available at http://www.internic.net.

DNS can do more than resolve host names. The name server also knows which host is receiving e-mails for an entire domain—the mail exchanger (MX).

For your machine to resolve an IP address, it must know about at least one name server and its IP address. Easily specify such a name server using YaST.

The protocol whois is closely related to DNS. With this program, quickly find out who is responsible for a given domain.

Note
Note: MDNS and .local Domain Names

The .local top level domain is treated as a link-local domain by the resolver. DNS requests are sent as multicast DNS requests instead of normal DNS requests. If you already use the .local domain in your name server configuration, you must switch this option off in /etc/host.conf. For more information, see the host.conf manual page.

If you want to switch off MDNS during installation, use nomdns=1 as a boot parameter.

For more information on multicast DNS, see http://www.multicastdns.org.

17.4 Configuring a Network Connection with YaST

There are many supported networking types on Linux. Most of them use different device names and the configuration files are spread over several locations in the file system. For a detailed overview of the aspects of manual network configuration, see Section 17.6, “Configuring a Network Connection Manually”.

On SUSE Linux Enterprise Desktop, where NetworkManager is active by default, all network cards are configured. If NetworkManager is not active, only the first interface with link up (with a network cable connected) is automatically configured. Additional hardware can be configured any time on the installed system. The following sections describe the network configuration for all types of network connections supported by SUSE Linux Enterprise Desktop.

17.4.1 Configuring the Network Card with YaST

To configure your Ethernet or Wi-Fi/Bluetooth card in YaST, select System › Network Settings. After starting the module, YaST displays the Network Settings dialog with four tabs: Global Options, Overview, Hostname/DNS and Routing.

The Global Options tab allows you to set general networking options such as the network setup method, IPv6, and general DHCP options. For more information, see Section 17.4.1.1, “Configuring Global Networking Options”.

The Overview tab contains information about installed network interfaces and configurations. Any properly detected network card is listed with its name. You can manually configure new cards, remove or change their configuration in this dialog. To manually configure a card that was not automatically detected, see Section 17.4.1.3, “Configuring an Undetected Network Card”. If you want to change the configuration of an already configured card, see Section 17.4.1.2, “Changing the Configuration of a Network Card”.

The Hostname/DNS tab allows you to set the host name of the machine and the name servers to be used. For more information, see Section 17.4.1.4, “Configuring Host Name and DNS”.

The Routing tab is used for the configuration of routing. See Section 17.4.1.5, “Configuring Routing” for more information.

Figure 17.3: Configuring Network Settings

17.4.1.1 Configuring Global Networking Options

The Global Options tab of the YaST Network Settings module allows you to set important global networking options, such as the use of NetworkManager, IPv6 and DHCP client options. These settings are applicable for all network interfaces.

In the Network Setup Method choose the way network connections are managed. If you want a NetworkManager desktop applet to manage connections for all interfaces, choose NetworkManager Service. NetworkManager is well suited for switching between multiple wired and wireless networks. If you do not run a desktop environment, or if your computer is a Xen server, virtual system, or provides network services such as DHCP or DNS in your network, use the Wicked Service method. If NetworkManager is used, nm-applet should be used to configure network options and the Overview, Hostname/DNS and Routing tabs of the Network Settings module are disabled. For more information on NetworkManager, see Chapter 30, Using NetworkManager.

In the IPv6 Protocol Settings choose whether to use the IPv6 protocol. It is possible to use IPv6 together with IPv4. By default, IPv6 is enabled. However, in networks that do not use the IPv6 protocol, response times can be faster with IPv6 disabled. To disable IPv6, deactivate Enable IPv6. If IPv6 is disabled, the kernel no longer loads the IPv6 module automatically. This setting is applied after reboot.

In the DHCP Client Options configure options for the DHCP client. The DHCP Client Identifier must be different for each DHCP client on a single network. If left empty, it defaults to the hardware address of the network interface. However, if you are running several virtual machines using the same network interface and, therefore, the same hardware address, specify a unique free-form identifier here.

The Hostname to Send specifies a string used for the host name option field when the DHCP client sends messages to the DHCP server. Some DHCP servers update name server zones (forward and reverse records) according to this host name (Dynamic DNS). Also, some DHCP servers require the Hostname to Send option field to contain a specific string in the DHCP messages from clients. Leave AUTO to send the current host name (that is, the one defined in /etc/HOSTNAME). Leave the option field empty to send no host name.

If you do not want to change the default route according to the information from DHCP, deactivate Change Default Route via DHCP.

17.4.1.2 Changing the Configuration of a Network Card

To change the configuration of a network card, select a card from the list of the detected cards in Network Settings › Overview in YaST and click Edit. The Network Card Setup dialog appears, in which you can adjust the card configuration using the General, Address and Hardware tabs.

17.4.1.2.1 Configuring IP Addresses

You can set the IP address of the network card or the way its IP address is determined in the Address tab of the Network Card Setup dialog. Both IPv4 and IPv6 addresses are supported. The network card can have No IP Address (which is useful for bonding devices), a Statically Assigned IP Address (IPv4 or IPv6) or a Dynamic Address assigned via DHCP or Zeroconf or both.

If using Dynamic Address, select whether to use DHCP Version 4 Only (for IPv4), DHCP Version 6 Only (for IPv6) or DHCP Both Version 4 and 6.

If possible, the first network card with link that is available during the installation is automatically configured to use automatic address setup via DHCP. On SUSE Linux Enterprise Desktop, where NetworkManager is active by default, all network cards are configured.

DHCP should also be used if you are using a DSL line with no static IP assigned by the ISP (Internet Service Provider). If you decide to use DHCP, configure the details in DHCP Client Options in the Global Options tab of the Network Settings dialog of the YaST network card configuration module. If you have a virtual host setup where different hosts communicate through the same interface, a DHCP Client Identifier is necessary to distinguish them.

DHCP is a good choice for client configuration but it is not ideal for server configuration. To set a static IP address, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST network card configuration module and click Edit.

  2. In the Address tab, choose Statically Assigned IP Address.

  3. Enter the IP Address. Both IPv4 and IPv6 addresses can be used. Enter the network mask in Subnet Mask. If an IPv6 address is used, enter the prefix length in Subnet Mask, in the format /64.

    Optionally, you can enter a fully qualified Hostname for this address, which will be written to the /etc/hosts configuration file.

  4. Click Next.

  5. To activate the configuration, click OK.

Note
Note: Interface Activation and Link Detection

During activation of a network interface, wicked checks for a carrier and only applies the IP configuration when a link has been detected. If you need to apply the configuration regardless of the link status (for example, when you want to test a service listening on a certain address), you can skip link detection by adding the variable LINK_REQUIRED=no to the configuration file of the interface in /etc/sysconfig/network/ (the ifcfg-* file).

Additionally, you can use the variable LINK_READY_WAIT=5 to specify the timeout for waiting for a link in seconds.
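
A minimal sketch of such an interface file, assuming eth0 as the interface name and an illustrative static address:

# /etc/sysconfig/network/ifcfg-eth0
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.10/24'
LINK_REQUIRED='no'    # apply the IP configuration even without carrier
LINK_READY_WAIT='5'   # wait at most 5 seconds for a link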

For more information about the ifcfg-* configuration files, refer to Section 17.6.2.5, “/etc/sysconfig/network/ifcfg-*” and man 5 ifcfg.

If you use the static address, the name servers and default gateway are not configured automatically. To configure name servers, proceed as described in Section 17.4.1.4, “Configuring Host Name and DNS”. To configure a gateway, proceed as described in Section 17.4.1.5, “Configuring Routing”.

17.4.1.2.2 Configuring Multiple Addresses

One network device can have multiple IP addresses.

Note
Note: Aliases Are a Compatibility Feature

These so-called aliases or labels work with IPv4 only; with IPv6 they are ignored. Using iproute2, network interfaces can have one or more addresses.

To set additional addresses for your network card with YaST, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST Network Settings dialog and click Edit.

  2. In the Address › Additional Addresses tab, click Add.

  3. Enter IPv4 Address Label, IP Address, and Netmask. Do not include the interface name in the alias name.

  4. To activate the configuration, confirm the settings.
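
The procedure above corresponds roughly to the following ifcfg entries; this is a sketch, with eth0, the suffix _1, and all address values chosen as placeholder assumptions:

# /etc/sysconfig/network/ifcfg-eth0
IPADDR='192.168.1.10/24'     # primary address
IPADDR_1='192.168.1.11/24'   # additional address; the "_1" suffix is arbitrary
LABEL_1='1'                  # IPv4 label; do not include the interface name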

17.4.1.2.3 Changing the Device Name and Udev Rules

It is possible to change the device name of the network card while it is used. It is also possible to determine whether the network card should be identified by udev via its hardware (MAC) address or via the bus ID. The latter option is preferable in large servers to simplify hotplugging of cards. To set these options with YaST, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST Network Settings dialog and click Edit.

  2. Go to the Hardware tab. The current device name is shown in Udev Rules. Click Change.

  3. Select whether udev should identify the card by its MAC Address or Bus ID. The current MAC address and bus ID of the card are shown in the dialog.

  4. To change the device name, check the Change Device Name option and edit the name.

  5. To activate the configuration, confirm the settings.

17.4.1.2.4 Changing Network Card Kernel Driver

For some network cards, several kernel drivers may be available. If the card is already configured, YaST allows you to select a kernel driver to be used from a list of available suitable drivers. It is also possible to specify options for the kernel driver. To set these options with YaST, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST Network Settings module and click Edit.

  2. Go to the Hardware tab.

  3. Select the kernel driver to be used in Module Name. Enter any options for the selected driver in Options in the form OPTION=VALUE. If more options are used, they should be space-separated.

  4. To activate the configuration, confirm the settings.

17.4.1.2.5 Activating the Network Device

If you use the method with wicked, you can configure your device to either start during boot, on cable connection, on card detection, manually, or never. To change device start-up, proceed as follows:

  1. In YaST select a card from the list of detected cards in System › Network Settings and click Edit.

  2. In the General tab, select the desired entry from Device Activation.

    Choose At Boot Time to start the device during the system boot. With On Cable Connection, the interface is watched for any existing physical connection. With On Hotplug, the interface is set up as soon as it is available. This is similar to the At Boot Time option, and only differs in that no error occurs if the interface is not present at boot time. Choose Manually to control the interface manually with ifup. Choose Never to not start the device. On NFSroot is similar to At Boot Time, but the interface does not shut down with the systemctl stop network command; the network service also takes care of the wicked service if wicked is active. Use this if you use an NFS or iSCSI root file system. (The corresponding STARTMODE values are sketched after this procedure.)

  3. To activate the configuration, confirm the settings.
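
In the interface's ifcfg file, these choices map roughly to the STARTMODE variable; a sketch, assuming eth0 (the exact mapping of the YaST labels to values is an assumption here):

# /etc/sysconfig/network/ifcfg-eth0
#STARTMODE='auto'      # At Boot Time
#STARTMODE='hotplug'   # On Hotplug
#STARTMODE='manual'    # Manually
#STARTMODE='off'       # Never
STARTMODE='nfsroot'    # On NFSroot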

Tip
Tip: NFS as a Root File System

On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.

When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems, as the root partition cannot be cleanly unmounted when the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 17.4.1.2.5, “Activating the Network Device” and choose On NFSroot in the Device Activation pane.

17.4.1.2.6 Setting Up Maximum Transfer Unit Size

You can set a maximum transmission unit (MTU) for the interface. MTU refers to the largest allowed packet size in bytes. A higher MTU brings higher bandwidth efficiency. However, large packets can block up a slow interface for some time, increasing the lag for further packets.

  1. In YaST select a card from the list of detected cards in System › Network Settings and click Edit.

  2. In the General tab, select the desired entry from the Set MTU list.

  3. To activate the configuration, confirm the settings.
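
The selected value ends up in the MTU variable of the interface's ifcfg file; a minimal sketch, assuming eth0 and jumbo frames:

# /etc/sysconfig/network/ifcfg-eth0
MTU='9000'   # largest allowed packet size in bytes; NIC and switch must support it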

17.4.1.2.7 PCIe Multifunction Devices

Multifunction devices that support LAN, iSCSI, and FCoE are supported. The YaST FCoE client (yast2 fcoe-client) shows the private flags in additional columns to allow the user to select the device meant for FCoE. The YaST network module (yast2 lan) excludes storage-only devices from network configuration.

17.4.1.2.8 Infiniband Configuration for IP-over-InfiniBand (IPoIB)

  1. In YaST select the InfiniBand device in System › Network Settings and click Edit.

  2. In the General tab, select one of the IP-over-InfiniBand (IPoIB) modes: connected (default) or datagram.

  3. To activate the configuration, confirm the settings.
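
As a sketch, the resulting interface file could look like the following; the interface name ib0, the address, and the IPOIB_MODE variable are assumptions based on the typical ifcfg layout:

# /etc/sysconfig/network/ifcfg-ib0
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.100.1/24'
IPOIB_MODE='connected'   # or 'datagram'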

For more information about InfiniBand, see /usr/src/linux/Documentation/infiniband/ipoib.txt.

17.4.1.2.9 Configuring the Firewall

Without having to enter the detailed firewall setup as described in Section 15.4.1, “Configuring the Firewall with YaST”, you can determine the basic firewall configuration for your device as part of the device setup. Proceed as follows:

  1. Open the YaST System › Network Settings module. In the Overview tab, select a card from the list of detected cards and click Edit.

  2. Enter the General tab of the Network Settings dialog.

  3. Determine the Firewall Zone to which your interface should be assigned. The following options are available:

    Firewall Disabled

    This option is available only if the firewall is disabled and not running. Only use this option if your machine is part of a greater network that is protected by an outer firewall.

    Automatically Assign Zone

    This option is available only if the firewall is enabled. The firewall is running and the interface is automatically assigned to a firewall zone. The zone which contains the keyword any or the external zone will be used for such an interface.

    Internal Zone (Unprotected)

    The firewall is running, but does not enforce any rules to protect this interface. Use this option if your machine is part of a greater network that is protected by an outer firewall. It is also useful for the interfaces connected to the internal network, when the machine has more network interfaces.

    Demilitarized Zone

    A demilitarized zone is an additional line of defense in front of an internal network and the (hostile) Internet. Hosts assigned to this zone can be reached from the internal network and from the Internet, but cannot access the internal network.

    External Zone

    The firewall is running on this interface and fully protects it against other—presumably hostile—network traffic. This is the default option.

  4. To activate the configuration, confirm the settings.

17.4.1.3 Configuring an Undetected Network Card

If a network card is not detected correctly, the card is not included in the list of detected cards. If you are sure that your system includes a driver for your card, you can configure it manually. You can also configure special network device types, such as bridge, bond, TUN or TAP. To configure an undetected network card (or a special device) proceed as follows:

  1. In the System › Network Settings › Overview dialog in YaST click Add.

  2. In the Hardware dialog, set the Device Type of the interface from the available options and Configuration Name. If the network card is a PCMCIA or USB device, activate the respective check box and exit this dialog with Next. Otherwise, you can define the kernel Module Name to be used for the card and its Options, if necessary.

    In Ethtool Options, you can set ethtool options used by ifup for the interface. For information about available options, see the ethtool manual page.

    If the option string starts with a - (for example, -K INTERFACE_NAME rx on), the second word in the string is replaced with the current interface name. Otherwise (for example, autoneg off speed 10) ifup adds -s INTERFACE_NAME to the beginning. (A sketch of the resulting ifcfg variable follows this procedure.)

  3. Click Next.

  4. Configure any needed options, such as the IP address, device activation or firewall zone for the interface in the General, Address, and Hardware tabs. For more information about the configuration options, see Section 17.4.1.2, “Changing the Configuration of a Network Card”.

  5. If you selected Wireless as the device type of the interface, configure the wireless connection in the next dialog.

  6. To activate the new network configuration, confirm the settings.
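
The Ethtool Options from the Hardware dialog end up in the ETHTOOL_OPTIONS variable of the interface's ifcfg file; a minimal sketch, assuming eth0:

# /etc/sysconfig/network/ifcfg-eth0
ETHTOOL_OPTIONS='autoneg off speed 10'   # ifup runs: ethtool -s eth0 autoneg off speed 10
#ETHTOOL_OPTIONS='-K eth0 rx on'         # leading "-": the second word is replaced
                                         # with the current interface name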

17.4.1.4 Configuring Host Name and DNS

If you did not change the network configuration during installation and the Ethernet card was already available, a host name was automatically generated for your computer and DHCP was activated. The same applies to the name service information your host needs to integrate into a network environment. If DHCP is used for network address setup, the list of domain name servers is automatically filled with the appropriate data. If a static setup is preferred, set these values manually.

To change the name of your computer and adjust the name server search list, proceed as follows:

  1. Go to the Network Settings › Hostname/DNS tab in the System module in YaST.

  2. Enter the Hostname and, if needed, the Domain Name. The domain is especially important if the machine is a mail server. Note that the host name is global and applies to all set network interfaces.

    If you are using DHCP to get an IP address, the host name of your computer will be automatically set by the DHCP. You should disable this behavior if you connect to different networks, because they may assign different host names and changing the host name at runtime may confuse the graphical desktop. To disable this behavior, deactivate Change Hostname via DHCP.

    Assign Hostname to Loopback IP associates your host name with 127.0.0.2 (loopback) IP address in /etc/hosts. This is a useful option if you want to have the host name resolvable at all times, even without active network.

  3. In Modify DNS Configuration, select the way the DNS configuration (name servers, search list, the content of the /etc/resolv.conf file) is modified.

    If the Use Default Policy option is selected, the configuration is handled by the netconfig script which merges the data defined statically (with YaST or in the configuration files) with data obtained dynamically (from the DHCP client or NetworkManager). This default policy is usually sufficient.

    If the Only Manually option is selected, netconfig is not allowed to modify the /etc/resolv.conf file. However, this file can be edited manually.

    If the Custom Policy option is selected, a Custom Policy Rule string defining the merge policy should be specified. The string consists of a comma-separated list of interface names to be considered a valid source of settings. Except for complete interface names, basic wild cards to match multiple interfaces are allowed, as well. For example, eth* ppp? will first target all eth and then all ppp0-ppp9 interfaces. There are two special policy values that indicate how to apply the static settings defined in the /etc/sysconfig/network/config file:

    STATIC

    The static settings need to be merged together with the dynamic settings.

    STATIC_FALLBACK

    The static settings are used only when no dynamic configuration is available.

    For more information, see the man page of netconfig(8) (man 8 netconfig). A sketch of the corresponding variables follows this procedure.

  4. Enter the Name Servers and fill in the Domain Search list. Name servers must be specified by IP addresses, such as 192.168.1.116, not by host names. Names specified in the Domain Search tab are domain names used for resolving host names without a specified domain. If more than one Domain Search is used, separate domains with commas or white space.

  5. To activate the configuration, confirm the settings.
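
In /etc/sysconfig/network/config, the custom policy and the static entries from the steps above correspond to variables like the following; all values here are illustrative assumptions:

# /etc/sysconfig/network/config
NETCONFIG_DNS_POLICY='STATIC_FALLBACK eth* ppp?'   # custom merge policy
NETCONFIG_DNS_STATIC_SERVERS='192.168.1.116'
NETCONFIG_DNS_STATIC_SEARCHLIST='example.com'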

It is also possible to edit the host name using YaST from the command line. The changes made by YaST take effect immediately (which is not the case when editing the /etc/HOSTNAME file manually). To change the host name, use the following command:

yast dns edit hostname=HOSTNAME

To change the name servers, use the following commands:

yast dns edit nameserver1=192.168.1.116
yast dns edit nameserver2=192.168.1.117
yast dns edit nameserver3=192.168.1.118

17.4.1.5 Configuring Routing

To make your machine communicate with other machines and other networks, routing information must be given to make network traffic take the correct path. If DHCP is used, this information is automatically provided. If a static setup is used, this data must be added manually.

  1. In YaST go to Network Settings › Routing.

  2. Enter the IP address of the Default Gateway (IPv4 and IPv6 if necessary). The default gateway matches every possible destination, but if a routing table entry exists that matches the required address, this will be used instead of the default route via the Default Gateway.

  3. More entries can be entered in the Routing Table. Enter the Destination network IP address, Gateway IP address and the Netmask. Select the Device through which the traffic to the defined network will be routed (the minus sign stands for any device). To omit any of these values, use the minus sign -. To enter a default gateway into the table, use default in the Destination field.

    Note
    Note: Route Prioritization

    If more default routes are used, it is possible to specify the metric option to determine which route has a higher priority. To specify the metric option, enter - metric NUMBER in Options. The route with the lowest metric value has the highest priority and is used as default. If the network device is disconnected, its route will be removed and the next one will be used. However, the current kernel does not use metric in static routing; only routing daemons like multipathd do.

  4. If the system is a router, enable IPv4 Forwarding and IPv6 Forwarding in the Network Settings as needed.

  5. To activate the configuration, confirm the settings.

17.5 NetworkManager

NetworkManager is the ideal solution for laptops and other portable computers. With NetworkManager, you do not need to worry about configuring network interfaces and switching between networks when you are moving.

17.5.1 NetworkManager and wicked

However, NetworkManager is not a suitable solution for all cases, so you can still choose between the wicked controlled method for managing network connections and NetworkManager. If you want to manage your network connection with NetworkManager, enable NetworkManager in the YaST Network Settings module as described in Section 30.2, “Enabling or Disabling NetworkManager” and configure your network connections with NetworkManager. For a list of use cases and a detailed description of how to configure and use NetworkManager, refer to Chapter 30, Using NetworkManager.

Some differences between wicked and NetworkManager:

root Privileges

If you use NetworkManager for network setup, you can easily switch, stop or start your network connection at any time from within your desktop environment using an applet. NetworkManager also makes it possible to change and configure wireless card connections without requiring root privileges. For this reason, NetworkManager is the ideal solution for a mobile workstation.

wicked also provides some ways to switch, stop or start the connection with or without user intervention, like user-managed devices. However, this always requires root privileges to change or configure a network device. This is often a problem for mobile computing, where it is not possible to preconfigure all the connection possibilities.

Types of Network Connections

Both wicked and NetworkManager can handle network connections with a wireless network (with WEP, WPA-PSK, and WPA-Enterprise access) and wired networks using DHCP and static configuration. They also support connection through dial-up and VPN. With NetworkManager you can also connect a mobile broadband (3G) modem or set up a DSL connection, which is not possible with the traditional configuration.

NetworkManager tries to keep your computer connected at all times using the best connection available. If the network cable is accidentally disconnected, it tries to reconnect. It can find the network with the best signal strength from the list of your wireless connections and automatically use it to connect. To get the same functionality with wicked, more configuration effort is required.

17.5.2 NetworkManager Functionality and Configuration Files

The individual network connection settings created with NetworkManager are stored in configuration profiles. The system connections configured with either NetworkManager or YaST are saved in /etc/NetworkManager/system-connections/* or in /etc/sysconfig/network/ifcfg-*. For GNOME, all user-defined connections are stored in GConf.

In case no profile is configured, NetworkManager automatically creates one and names it Auto $INTERFACE-NAME. This is an attempt to work without any configuration for as many cases as (securely) possible. If the automatically created profiles do not suit your needs, use the network connection configuration dialogs provided by GNOME to modify them as desired. For more information, see Section 30.3, “Configuring Network Connections”.

17.5.3 Controlling and Locking Down NetworkManager Features

On centrally administered machines, certain NetworkManager features can be controlled or disabled with PolKit, for example whether a user is allowed to modify administrator-defined connections or define their own network configurations. To view or change the respective NetworkManager policies, start the graphical Authorizations tool for PolKit. In the tree on the left side, find them below the network-manager-settings entry. For an introduction to PolKit and details on how to use it, refer to Chapter 9, Authorization with PolKit.

17.6 Configuring a Network Connection Manually

Manual configuration of the network software should be the last alternative. Using YaST is recommended. However, this background information about the network configuration can also assist your work with YaST.

17.6.1 The wicked Network Configuration

The tool and library called wicked provides a new framework for network configuration.

One of the challenges with traditional network interface management is that different layers of network management get jumbled together into one single script, or at most two different scripts. These scripts interact with each other in a way that is not well-defined. This leads to unpredictable issues, obscure constraints and conventions, etc. Several layers of special hacks for a variety of different scenarios increase the maintenance burden. Address configuration protocols are being used that are implemented via daemons like dhcpcd, which interact rather poorly with the rest of the infrastructure. Funky interface naming schemes that require heavy udev support are introduced to achieve persistent identification of interfaces.

The idea of wicked is to decompose the problem in several ways. None of them is entirely novel, but trying to put ideas from different projects together is hopefully going to create a better solution overall.

One approach is to use a client/server model. This allows wicked to define standardized facilities for things like address configuration that are well integrated with the overall framework. For example, using a specific address configuration, the administrator may request that an interface should be configured via DHCP or IPv4 zeroconf. In this case, the address configuration service simply obtains the lease from its server and passes it on to the wicked server process that installs the requested addresses and routes.

The other approach to decomposing the problem is to enforce the layering aspect. For any type of network interface, it is possible to define a dbus service that configures the network interface's device layer—a VLAN, a bridge, a bonding, or a paravirtualized device. Common functionality, such as address configuration, is implemented by joint services that are layered on top of these device specific services without having to implement them specifically.

The wicked framework implements these two aspects by using a variety of dbus services, which get attached to a network interface depending on its type. Here is a rough overview of the current object hierarchy in wicked.

Each network interface is represented via a child object of /org/opensuse/Network/Interfaces. The name of the child object is given by its ifindex. For example, the loopback interface, which usually gets ifindex 1, is /org/opensuse/Network/Interfaces/1, the first Ethernet interface registered is /org/opensuse/Network/Interfaces/2.

Each network interface has a class associated with it, which is used to select the dbus interfaces it supports. By default, each network interface is of class netif, and wickedd will automatically attach all interfaces compatible with this class. In the current implementation, this includes the following interfaces:

org.opensuse.Network.Interface

Generic network interface functions, such as taking the link up or down, assigning an MTU, etc.

org.opensuse.Network.Addrconf.ipv4.dhcp, org.opensuse.Network.Addrconf.ipv6.dhcp, org.opensuse.Network.Addrconf.ipv4.auto

Address configuration services for DHCP, IPv4 zeroconf, etc.

Beyond this, network interfaces may require or offer special configuration mechanisms. For an Ethernet device, for example, you should be able to control the link speed, offloading of checksumming, etc. To achieve this, Ethernet devices have a class of their own, called netif-ethernet, which is a subclass of netif. As a consequence, the dbus interfaces assigned to an Ethernet interface include all the services listed above, plus the org.opensuse.Network.Ethernet service available only to objects belonging to the netif-ethernet class.

Similarly, there exist classes for interface types like bridges, VLANs, bonds, or infinibands.

How do you interact with an interface like VLAN (which is really a virtual network interface that sits on top of an Ethernet device) that needs to be created first? For this, wicked defines factory interfaces, such as org.opensuse.Network.VLAN.Factory. Such a factory interface offers a single function that lets you create an interface of the requested type. These factory interfaces are attached to the /org/opensuse/Network/Interfaces list node.

17.6.1.1 wicked Architecture and Features

The wicked service comprises several parts as depicted in Figure 17.4, “wicked architecture”.

Figure 17.4: wicked architecture

wicked currently supports the following:

  • Configuration file back-ends to parse SUSE style /etc/sysconfig/network files.

  • An internal configuration back-end to represent network interface configuration in XML.

  • Bring up and shutdown of normal network interfaces such as Ethernet or InfiniBand, VLAN, bridge, bonds, tun, tap, dummy, macvlan, macvtap, hsi, qeth, iucv, and wireless (currently limited to one wpa-psk/eap network) devices.

  • A built-in DHCPv4 client and a built-in DHCPv6 client.

  • The nanny daemon (enabled by default) helps to automatically bring up configured interfaces when the device is available (interface hotplugging) and set up the IP configuration when a link (carrier) is detected. See Section 17.6.1.3, “Nanny” for more information.

  • wicked was implemented as a group of DBus services that are integrated with systemd. So the usual systemctl commands will apply to wicked.

17.6.1.2 Using wicked

On SUSE Linux Enterprise, wicked is running by default. If you want to check what is currently enabled and whether it is running, call:

systemctl status network

If wicked is enabled, you will see something along these lines:

wicked.service - wicked managed network interfaces
    Loaded: loaded (/usr/lib/systemd/system/wicked.service; enabled)
    ...

In case something different is running (for example, NetworkManager) and you want to switch to wicked, first stop what is running and then enable wicked:

systemctl is-active network && \
systemctl stop      network
systemctl enable --force wicked

This enables the wicked services, creates the network.service to wicked.service alias link, and starts the network at the next boot.

Starting the server process:

systemctl start wickedd

This starts wickedd (the main server) and associated supplicants:

/usr/lib/wicked/bin/wickedd-auto4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp6 --systemd --foreground
/usr/sbin/wickedd --systemd --foreground
/usr/sbin/wickedd-nanny --systemd --foreground

Then bringing up the network:

systemctl start wicked

Alternatively use the network.service alias:

systemctl start network

These commands use the default or system configuration sources as defined in /etc/wicked/client.xml.

To enable debugging, set WICKED_DEBUG in /etc/sysconfig/network/config, for example:

WICKED_DEBUG="all"

Or, to omit some:

WICKED_DEBUG="all,-dbus,-objectmodel,-xpath,-xml"

Use the client utility to display interface information for all interfaces or the interface specified with IFNAME:

wicked show all
wicked show IFNAME

In XML output:

wicked show-xml all
wicked show-xml IFNAME

Bringing up one interface:

wicked ifup eth0
wicked ifup wlan0
...

Because there is no configuration source specified, the wicked client checks its default sources of configuration defined in /etc/wicked/client.xml:

  1. firmware: iSCSI Boot Firmware Table (iBFT)

  2. compat: ifcfg files—implemented for compatibility

Whatever wicked gets from those sources for a given interface is applied. The intended order of importance is firmware, then compat—this may be changed in the future.

For more information, see the wicked man page.

17.6.1.3 Nanny

Nanny is an event and policy driven daemon that is responsible for asynchronous or unsolicited scenarios such as hotplugging devices. Thus the nanny daemon helps with starting or restarting delayed or temporarily gone devices. Nanny monitors device and link changes, and integrates new devices defined by the current policy set. Nanny continues the setup even if ifup has already exited because of specified timeout constraints.

By default, the nanny daemon is active on the system. It is enabled in the /etc/wicked/common.xml configuration file:

<config>
  ...
  <use-nanny>true</use-nanny>
</config>

This setting causes ifup and ifreload to apply a policy with the effective configuration to the nanny daemon; then, nanny configures wickedd and thus ensures hotplug support. It waits in the background for events or changes (such as new devices or carrier on).

17.6.1.4 Bringing Up Multiple Interfaces

For bonds and bridges, it may make sense to define the entire device topology in one file (ifcfg-bondX), and bring it up in one go. wicked then can bring up the whole configuration if you specify the top level interface names (of the bridge or bond):

wicked ifup br0

This command automatically sets up the bridge and its dependencies in the appropriate order without the need to list the dependencies (ports, etc.) separately.
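
A sketch of such a one-file topology for a bridge; the names br0 and eth0 and the DHCP setup are placeholder assumptions:

# /etc/sysconfig/network/ifcfg-br0
STARTMODE='auto'
BOOTPROTO='dhcp'
BRIDGE='yes'
BRIDGE_PORTS='eth0'   # ports are brought up together with the bridge
BRIDGE_STP='off'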

To bring up multiple interfaces in one command:

wicked ifup bond0 br0 br1 br2

Or also all interfaces:

wicked ifup all

17.6.1.5 Using Tunnels with Wicked

When you need to bind a tunnel to a specific device with Wicked, use the TUNNEL_DEVICE variable. It specifies an optional device name to bind the tunnel to; the tunneled packets will only be routed via this device.
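
A sketch of a tunnel interface file; the GRE tunnel type, the interface names, and the addresses are placeholder assumptions:

# /etc/sysconfig/network/ifcfg-tun0
STARTMODE='auto'
TUNNEL='gre'
TUNNEL_DEVICE='eth0'                  # route tunneled packets only via eth0
TUNNEL_LOCAL_IPADDR='192.168.1.242'
TUNNEL_REMOTE_IPADDR='192.168.1.245'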

For more information, refer to man 5 ifcfg-tunnel.

17.6.1.6 Handling Incremental Changes

With wicked, there is no need to actually take down an interface to reconfigure it (unless it is required by the kernel). For example, to add another IP address or route to a statically configured network interface, add the IP address to the interface definition, and do another ifup operation. The server will try hard to update only those settings that have changed. This applies to link-level options such as the device MTU or the MAC address, and network-level settings, such as addresses, routes, or even the address configuration mode (for example, when moving from a static configuration to DHCP).
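
For example, a second address could be appended to an existing interface definition and applied without taking the interface down; the interface name and address are assumptions:

# add another address to the interface definition ...
echo "IPADDR_1='192.168.2.10/24'" >> /etc/sysconfig/network/ifcfg-eth0
# ... and apply the change; only the new setting is installed
wicked ifup eth0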

Things get tricky of course with virtual interfaces combining several real devices such as bridges or bonds. For bonded devices, it is not possible to change certain parameters while the device is up. Doing that will result in an error.

However, adding or removing the child devices of a bond or bridge, or choosing a bond's primary interface, should still work.

17.6.1.7 Wicked Extensions: Address Configuration

wicked is designed to be extensible with shell scripts. These extensions can be defined in the config.xml file.

Currently, several classes of extensions are supported:

  • link configuration: these are scripts responsible for setting up a device's link layer according to the configuration provided by the client, and for tearing it down again.

  • address configuration: these are scripts responsible for managing a device's address configuration. Usually address configuration and DHCP are managed by wicked itself, but can be implemented by means of extensions.

  • firewall extension: these scripts can apply firewall rules.

Typically, extensions have a start and a stop command, an optional pid file, and a set of environment variables that get passed to the script.

To illustrate how this is supposed to work, look at a firewall extension defined in /etc/wicked/server.xml:

<dbus-service interface="org.opensuse.Network.Firewall">
 <action name="firewallUp"   command="/etc/wicked/extensions/firewall up"/>
 <action name="firewallDown" command="/etc/wicked/extensions/firewall down"/>

 <!-- default environment for all calls to this extension script -->
 <putenv name="WICKED_OBJECT_PATH" value="$object-path"/>
 <putenv name="WICKED_INTERFACE_NAME" value="$property:name"/>
 <putenv name="WICKED_INTERFACE_INDEX" value="$property:index"/>
</dbus-service>

The extension is attached to the <dbus-service> tag and defines commands to execute for the actions of this interface. Further, the declaration can define and initialize environment variables passed to the actions.

17.6.1.8 Wicked Extensions: Configuration Files

You can extend the handling of configuration files with scripts as well. For example, DNS updates from leases are ultimately handled by the extensions/resolver script, with behavior configured in server.xml:

<system-updater name="resolver">
 <action name="backup" command="/etc/wicked/extensions/resolver backup"/>
 <action name="restore" command="/etc/wicked/extensions/resolver restore"/>
 <action name="install" command="/etc/wicked/extensions/resolver install"/>
 <action name="remove" command="/etc/wicked/extensions/resolver remove"/>
</system-updater>

When an update arrives in wickedd, the system updater routines parse the lease and call the appropriate commands (backup, install, etc.) in the resolver script. This in turn configures the DNS settings using /sbin/netconfig, or by manually writing /etc/resolv.conf as a fallback.

17.6.2 Configuration Files

This section provides an overview of the network configuration files and explains their purpose and the format used.

17.6.2.1 /etc/wicked/common.xml

The /etc/wicked/common.xml file contains common definitions that should be used by all applications. It is sourced/included by the other configuration files in this directory. Although you can use this file to enable debugging across all wicked components, we recommend using the file /etc/wicked/local.xml for this purpose. After applying maintenance updates, you might lose your changes, as /etc/wicked/common.xml might be overwritten. The /etc/wicked/common.xml file includes /etc/wicked/local.xml in the default installation, thus you typically do not need to modify /etc/wicked/common.xml.

In case you want to disable nanny, set <use-nanny> to false, restart the wickedd.service, and then run the following command to apply all configurations and policies:

wicked ifup all

Note
Note: Configuration Files

The wickedd, wicked, or nanny programs try to read /etc/wicked/common.xml if their own configuration files do not exist.

17.6.2.2 /etc/wicked/server.xml

The file /etc/wicked/server.xml is read by the wickedd server process at start-up. The file stores extensions to /etc/wicked/common.xml. In addition, this file configures the handling of a resolver and the receiving of information from addrconf supplicants, for example DHCP.

We recommend adding any changes required to this file into a separate file, /etc/wicked/server-local.xml, which gets included by /etc/wicked/server.xml. By using a separate file, you avoid overwriting your changes during maintenance updates.

17.6.2.3 /etc/wicked/client.xml

The /etc/wicked/client.xml is used by the wicked command. The file specifies the location of a script used when discovering devices managed by ibft and configures locations of network interface configurations.

We recommend adding any changes required to this file into a separate file, /etc/wicked/client-local.xml, which gets included by /etc/wicked/client.xml. By using a separate file, you avoid overwriting your changes during maintenance updates.

17.6.2.4 /etc/wicked/nanny.xml

The /etc/wicked/nanny.xml configures types of link layers. We recommend adding specific configuration into a separate file, /etc/wicked/nanny-local.xml, to avoid losing the changes during maintenance updates.

17.6.2.5 /etc/sysconfig/network/ifcfg-*

These files contain the traditional configurations for network interfaces. In SUSE Linux Enterprise 11, this was the only supported format besides iBFT firmware.

Note
Note: wicked and the ifcfg-* Files

wicked reads these files if you specify the compat: prefix. According to the SUSE Linux Enterprise Desktop default configuration in /etc/wicked/client.xml, wicked tries these files before the XML configuration files in /etc/wicked/ifconfig.

The --ifconfig switch is provided mostly for testing only. If specified, default configuration sources defined in /etc/wicked/ifconfig are not applied.

The ifcfg-* files include information such as the start mode and the IP address. Possible parameters are described in the manual page of ifup. Additionally, most variables from the dhcp and wireless files can be used in the ifcfg-* files if a general setting should be used for only one interface. However, most of the /etc/sysconfig/network/config variables are global and cannot be overridden in ifcfg-files. For example, NETCONFIG_* variables are global.

For configuring macvlan and macvtap interfaces, see the ifcfg-macvlan and ifcfg-macvtap man pages. For example, for a macvlan interface provide an ifcfg-macvlan0 file with settings as follows:

STARTMODE='auto'
MACVLAN_DEVICE='eth0'
#MACVLAN_MODE='vepa'
#LLADDR=02:03:04:05:06:aa

For ifcfg.template, see Section 17.6.2.6, “/etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless”.

17.6.2.6 /etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless

The file config contains general settings for the behavior of ifup, ifdown and ifstatus. dhcp contains settings for DHCP and wireless for wireless LAN cards. The variables in all three configuration files are commented. Some variables from /etc/sysconfig/network/config can also be used in ifcfg-* files, where they are given a higher priority. The /etc/sysconfig/network/ifcfg.template file lists variables that can be specified in a per interface scope. However, most of the /etc/sysconfig/network/config variables are global and cannot be overridden in ifcfg-files. For example, NETWORKMANAGER or NETCONFIG_* variables are global.

Note
Note: Using DHCPv6

In SUSE Linux Enterprise 11, DHCPv6 used to work even on networks where IPv6 Router Advertisements (RAs) were not configured properly. Starting with SUSE Linux Enterprise 12, DHCPv6 will correctly require that at least one of the routers on the network sends out RAs that indicate that this network is managed by DHCPv6.

For networks where the router cannot be configured correctly, the ifcfg option allows the user to override this behavior by specifying DHCLIENT6_MODE='managed' in the ifcfg file. You can also activate this workaround with a boot parameter in the installation system:

ifcfg=eth0=dhcp6,DHCLIENT6_MODE=managed

17.6.2.7 /etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-*

The static routing of TCP/IP packets is determined by the /etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-* files. All the static routes required by the various system tasks can be specified in /etc/sysconfig/network/routes: routes to a host, routes to a host via a gateway and routes to a network. For each interface that needs individual routing, define an additional configuration file: /etc/sysconfig/network/ifroute-*. Replace the wild card (*) with the name of the interface. The entries in the routing configuration files look like this:

# Destination     Gateway           Netmask            Interface  Options

The route's destination is in the first column. This column may contain the IP address of a network or host or, in the case of reachable name servers, the fully qualified network or host name. The network should be written in CIDR notation (address with the associated routing prefix-length) such as 10.10.0.0/16 for IPv4 or fc00::/7 for IPv6 routes. The keyword default indicates that the route is the default gateway in the same address family as the gateway. For devices without a gateway use explicit 0.0.0.0/0 or ::/0 destinations.

The second column contains the default gateway or a gateway through which a host or network can be accessed.

The third column is deprecated; it used to contain the IPv4 netmask of the destination. For IPv6 routes, the default route, or when using a prefix-length (CIDR notation) in the first column, enter a dash (-) here.

The fourth column contains the name of the interface. If you leave it empty or use a dash (-), it can cause unintended behavior in /etc/sysconfig/network/routes. For more information, see the routes man page.

An (optional) fifth column can be used to specify special options. For details, see the routes man page.

Example 17.5: Common Network Interfaces and Some Static Routes
# --- IPv4 routes in CIDR prefix notation:
# Destination     [Gateway]         -                  Interface
127.0.0.0/8       -                 -                  lo
204.127.235.0/24  -                 -                  eth0
default           204.127.235.41    -                  eth0
207.68.156.51/32  207.68.145.45     -                  eth1
192.168.0.0/16    207.68.156.51     -                  eth1

# --- IPv4 routes in deprecated netmask notation:
# Destination     [Dummy/Gateway]   Netmask            Interface
#
127.0.0.0         0.0.0.0           255.255.255.0      lo
204.127.235.0     0.0.0.0           255.255.255.0      eth0
default           204.127.235.41    0.0.0.0            eth0
207.68.156.51     207.68.145.45     255.255.255.255    eth1
192.168.0.0       207.68.156.51     255.255.0.0        eth1

# --- IPv6 routes are always using CIDR notation:
# Destination     [Gateway]                -           Interface
2001:DB8:100::/64 -                        -           eth0
2001:DB8:100::/32 fe80::216:3eff:fe6d:c042 -           eth0

17.6.2.8 /etc/resolv.conf

The domain to which the host belongs is specified in /etc/resolv.conf (keyword search). Up to six domains with a total of 256 characters can be specified with the search option. When resolving a name that is not fully qualified, an attempt is made to generate one by attaching the individual search entries. Up to three name servers can be specified with the nameserver option, each on a line of its own. Comments are preceded by hash mark or semicolon signs (# or ;). As an example, see Example 17.6, “/etc/resolv.conf”.

However, the /etc/resolv.conf should not be edited by hand. Instead, it is generated by the netconfig script. To define static DNS configuration without using YaST, edit the appropriate variables manually in the /etc/sysconfig/network/config file:

NETCONFIG_DNS_STATIC_SEARCHLIST

list of DNS domain names used for host name lookup

NETCONFIG_DNS_STATIC_SERVERS

list of name server IP addresses to use for host name lookup

NETCONFIG_DNS_FORWARDER

the name of the DNS forwarder that needs to be configured, for example bind or resolver

NETCONFIG_DNS_RESOLVER_OPTIONS

arbitrary options that will be written to /etc/resolv.conf, for example:

debug attempts:1 timeout:10

For more information, see the resolv.conf man page.

NETCONFIG_DNS_RESOLVER_SORTLIST

list of up to 10 items, for example:

130.155.160.0/255.255.240.0 130.155.0.0

For more information, see the resolv.conf man page.

To disable DNS configuration using netconfig, set NETCONFIG_DNS_POLICY=''. For more information about netconfig, see the netconfig(8) man page (man 8 netconfig).
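
Taken together, a static DNS setup in /etc/sysconfig/network/config might look like this; the server addresses and search domain are illustrative:

NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="192.168.1.116 192.168.1.117"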

Example 17.6: /etc/resolv.conf
# Our domain
search example.com
#
# We use dns.example.com (192.168.1.116) as nameserver
nameserver 192.168.1.116

17.6.2.9 /sbin/netconfig

netconfig is a modular tool to manage additional network configuration settings. It merges statically defined settings with settings provided by autoconfiguration mechanisms such as DHCP or PPP according to a predefined policy. The required changes are applied to the system by calling the netconfig modules that are responsible for modifying a configuration file and restarting a service or a similar action.

netconfig recognizes three main actions. The netconfig modify and netconfig remove commands are used by daemons such as DHCP or PPP to provide or remove settings to netconfig. Only the netconfig update command is available for the user:

modify

The netconfig modify command modifies the current interface and service specific dynamic settings and updates the network configuration. Netconfig reads settings from standard input or from a file specified with the --lease-file FILENAME option and internally stores them until a system reboot (or the next modify or remove action). Already existing settings for the same interface and service combination are overwritten. The interface is specified by the -i INTERFACE_NAME parameter. The service is specified by the -s SERVICE_NAME parameter.

remove

The netconfig remove command removes the dynamic settings provided by a previous modify action for the specified interface and service combination and updates the network configuration. The interface is specified by the -i INTERFACE_NAME parameter. The service is specified by the -s SERVICE_NAME parameter.

update

The netconfig update command updates the network configuration using current settings. This is useful when the policy or the static configuration has changed. Use the -m MODULE_TYPE parameter, if you want to update a specified service only (dns, nis, or ntp).
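
For example, to re-apply only the DNS settings after changing the static DNS variables in /etc/sysconfig/network/config, enter:

netconfig update -m dns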

The netconfig policy and the static configuration settings are defined either manually or using YaST in the /etc/sysconfig/network/config file. The dynamic configuration settings provided by autoconfiguration tools such as DHCP or PPP are delivered directly by these tools with the netconfig modify and netconfig remove actions. When NetworkManager is enabled, netconfig (in policy mode auto) uses only NetworkManager settings, ignoring settings from any other interfaces configured using the traditional ifup method. If NetworkManager does not provide any setting, static settings are used as a fallback. A mixed usage of NetworkManager and the wicked method is not supported.

For more information about netconfig, see man 8 netconfig.

17.6.2.10 /etc/hosts

In this file, shown in Example 17.7, “/etc/hosts”, IP addresses are assigned to host names. If no name server is implemented, all hosts to which an IP connection will be set up must be listed here. For each host, enter a line consisting of the IP address, the fully qualified host name, and the host name into the file. The IP address must be at the beginning of the line and the entries must be separated by blanks or tabs. Comments are always preceded by the # sign.

Example 17.7: /etc/hosts
127.0.0.1 localhost
192.168.2.100 jupiter.example.com jupiter
192.168.2.101 venus.example.com venus

17.6.2.11 /etc/networks

Here, network names are converted to network addresses. The format is similar to that of the hosts file, except the network names precede the addresses. See Example 17.8, “/etc/networks”.

Example 17.8: /etc/networks
loopback     127.0.0.0
localnet     192.168.0.0

17.6.2.12 /etc/host.conf

Name resolution—the translation of host and network names via the resolver library—is controlled by this file. This file is only used for programs linked to libc4 or libc5. For current glibc programs, refer to the settings in /etc/nsswitch.conf. Each parameter must always be entered on a separate line. Comments are preceded by a # sign. Table 17.2, “Parameters for /etc/host.conf” shows the parameters available. A sample /etc/host.conf is shown in Example 17.9, “/etc/host.conf”.

Table 17.2: Parameters for /etc/host.conf

order hosts, bind

Specifies in which order the services are accessed for the name resolution. Available arguments are (separated by blank spaces or commas):

hosts: searches the /etc/hosts file

bind: accesses a name server

nis: uses NIS

multi on/off

Defines if a host entered in /etc/hosts can have multiple IP addresses.

nospoof on, spoofalert on/off

These parameters influence the name server spoofing but do not exert any influence on the network configuration.

trim domainname

The specified domain name is separated from the host name after host name resolution (as long as the host name includes the domain name). This option is useful only if names from the local domain are in the /etc/hosts file, but should still be recognized with the attached domain names.

Example 17.9: /etc/host.conf
# We have named running
order hosts bind
# Allow multiple addresses
multi on

17.6.2.13 /etc/nsswitch.conf

The introduction of the GNU C Library 2.0 was accompanied by the introduction of the Name Service Switch (NSS). Refer to the nsswitch.conf(5) man page and The GNU C Library Reference Manual for details.

The order for queries is defined in the file /etc/nsswitch.conf. A sample nsswitch.conf is shown in Example 17.10, “/etc/nsswitch.conf”. Comments are preceded by # signs. In this example, the entry under the hosts database means that a request is first sent to /etc/hosts (files) and then to DNS.

Example 17.10: /etc/nsswitch.conf
passwd:     compat
group:      compat

hosts:      files dns
networks:   files dns

services:   db files
protocols:  db files
rpc:        files
ethers:     files
netmasks:   files
netgroup:   files nis
publickey:  files

bootparams: files
automount:  files nis
aliases:    files nis
shadow:     compat
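
To check which source answers a query with the order configured above, you can use the getent utility that ships with glibc. For example, assuming the host jupiter.example.com from Example 17.7, “/etc/hosts” is listed in /etc/hosts:

getent hosts jupiter.example.com

Because files precedes dns in the hosts line, the entry is returned from /etc/hosts before DNS is consulted.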

The databases available over NSS are listed in Table 17.3, “Databases Available via /etc/nsswitch.conf”. The configuration options for NSS databases are listed in Table 17.4, “Configuration Options for NSS Databases”.

Table 17.3: Databases Available via /etc/nsswitch.conf

aliases

Mail aliases implemented by sendmail; see man 5 aliases.

ethers

Ethernet addresses.

netmasks

List of networks and their subnet masks. Only needed if you use subnetting.

group

User groups used by getgrent. See also the man page for group.

hosts

Host names and IP addresses, used by gethostbyname and similar functions.

netgroup

Valid host and user lists in the network for controlling access permissions; see the netgroup(5) man page.

networks

Network names and addresses, used by getnetent.

publickey

Public and secret keys for Secure_RPC used by NFS and NIS+.

passwd

User passwords, used by getpwent; see the passwd(5) man page.

protocols

Network protocols, used by getprotoent; see the protocols(5) man page.

rpc

Remote procedure call names and addresses, used by getrpcbyname and similar functions.

services

Network services, used by getservent.

shadow

Shadow passwords of users, used by getspnam; see the shadow(5) man page.

Table 17.4: Configuration Options for NSS Databases

files

directly access files, for example, /etc/aliases

db

access via a database

nis, nisplus

NIS, see also Chapter 3, Using NIS

dns

can only be used as an extension for hosts and networks

compat

can only be used as an extension for passwd, shadow and group

17.6.2.14 /etc/nscd.conf

This file is used to configure nscd (name service cache daemon). See the nscd(8) and nscd.conf(5) man pages. By default, the system entries of passwd, groups, and hosts are cached by nscd. This is important for the performance of directory services, such as NIS and LDAP, because otherwise the network connection needs to be used for every access to names, groups, or hosts.

If the caching for passwd is activated, it usually takes about fifteen seconds until a newly added local user is recognized. Reduce this waiting time by restarting nscd with:

systemctl restart nscd
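
Alternatively, instead of restarting the daemon, you can invalidate a single cache. For example, to flush only the passwd cache (see the nscd(8) man page for the available tables), enter:

nscd -i passwd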

17.6.2.15 /etc/HOSTNAME

/etc/HOSTNAME contains the fully qualified host name (FQHN). The fully qualified host name is the host name with the domain name attached. This file must contain only one line (in which the host name is set). It is read while the machine is booting.

17.6.3 Testing the Configuration

Before you write your configuration to the configuration files, you can test it. To set up a test configuration, use the ip command. To test the connection, use the ping command.

The command ip changes the network configuration directly without saving it in the configuration file. Unless you enter your configuration in the correct configuration files, the changed network configuration is lost on reboot.

Note
Note: ifconfig and route Are Obsolete

The ifconfig and route tools are obsolete. Use ip instead. ifconfig, for example, limits interface names to 9 characters.

17.6.3.1 Configuring a Network Interface with ip

ip is a tool to show and configure network devices, routing, policy routing, and tunnels.

ip is a complex tool. Its general syntax is ip OPTIONS OBJECT COMMAND. You can work with the following objects:

link

This object represents a network device.

address

This object represents the IP address of a device.

neighbor

This object represents an ARP or NDISC cache entry.

route

This object represents the routing table entry.

rule

This object represents a rule in the routing policy database.

maddress

This object represents a multicast address.

mroute

This object represents a multicast routing cache entry.

tunnel

This object represents a tunnel over IP.

If no command is given, the default command is used (usually list).

Change the state of a device with the command ip link set DEVICE_NAME STATE, where STATE is up or down. For example, to deactivate the device eth0, enter ip link set eth0 down. To activate it again, use ip link set eth0 up.

After activating a device, you can configure it. To set the IP address, use ip addr add IP_ADDRESS dev DEVICE_NAME. For example, to set the address of the interface eth0 to 192.168.12.154/30 with standard broadcast (option brd), enter ip addr add 192.168.12.154/30 brd + dev eth0.

To have a working connection, you must also configure the default gateway. To set a gateway for your system, enter ip route add default via GATEWAY_IP_ADDRESS. To translate one IP address to another, use nat: ip route add nat IP_ADDRESS via OTHER_IP_ADDRESS.

To display all devices, use ip link ls. To display only the running interfaces, use ip link ls up. To print interface statistics for a device, enter ip -s link ls DEVICE_NAME. To view the addresses of your devices, enter ip addr. The output of ip addr also shows the MAC addresses of your devices. To show all routes, use ip route show.
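
Putting these commands together, a minimal manual test configuration could look like the following sketch (the interface name and the addresses are examples only):

ip link set eth0 up
ip addr add 192.168.12.154/30 brd + dev eth0
ip route add default via 192.168.12.153
ip addr show dev eth0

As noted above, this configuration is lost on reboot unless it is written to the configuration files.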

For more information about using ip, enter ip help or see the ip(8) man page. The help option is also available for all ip subcommands. If, for example, you need help for ip addr, enter ip addr help. Find the ip manual in /usr/share/doc/packages/iproute2/ip-cref.pdf.

17.6.3.2 Testing a Connection with ping

The ping command is the standard tool for testing whether a TCP/IP connection works. It uses the ICMP protocol to send a small data packet, ECHO_REQUEST datagram, to the destination host, requesting an immediate reply. If this works, ping displays a message to that effect. This indicates that the network link is functioning.

ping does more than only test the function of the connection between two computers: it also provides some basic information about the quality of the connection. In Example 17.11, “Output of the Command ping”, you can see an example of the ping output. The second-to-last line contains information about the number of transmitted packets, packet loss, and total time of ping running.

As the destination, you can use a host name or IP address, for example, ping example.com or ping 192.168.3.100. The program sends packets until you press Ctrl+C.

If you only need to check the functionality of the connection, you can limit the number of packets with the -c option. For example, to limit ping to three packets, enter ping -c 3 example.com.

Example 17.11: Output of the Command ping
ping -c 3 example.com
PING example.com (192.168.3.100) 56(84) bytes of data.
64 bytes from example.com (192.168.3.100): icmp_seq=1 ttl=49 time=188 ms
64 bytes from example.com (192.168.3.100): icmp_seq=2 ttl=49 time=184 ms
64 bytes from example.com (192.168.3.100): icmp_seq=3 ttl=49 time=183 ms
--- example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 183.417/185.447/188.259/2.052 ms

The default interval between two packets is one second. To change the interval, ping provides the option -i. For example, to increase the ping interval to ten seconds, enter ping -i 10 example.com.

In a system with multiple network devices, it is sometimes useful to send the ping through a specific interface address. To do so, use the -I option with the name of the selected device, for example, ping -I wlan1 example.com.

For more options and information about using ping, enter ping -h or see the ping(8) man page.

Tip
Tip: Pinging IPv6 Addresses

For IPv6 addresses, use the ping6 command. Note that to ping link-local addresses, you must specify the interface with -I. The following command works if the address is reachable via eth1:

ping6 -I eth1 fe80::117:21ff:feda:a425

17.6.4 Unit Files and Start-Up Scripts

Apart from the configuration files described above, there are also systemd unit files and various scripts that load the network services while the machine is booting. These are started when the system is switched to the multi-user.target target. Some of these unit files and scripts are described in Some Unit Files and Start-Up Scripts for Network Programs. For more information about systemd, see Chapter 14, The systemd Daemon and for more information about the systemd targets, see the man page of systemd.special (man systemd.special).

Some Unit Files and Start-Up Scripts for Network Programs
network.target

network.target is the systemd target for networking, but its meaning depends on the settings provided by the system administrator.

For more information, see http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/.

multi-user.target

multi-user.target is the systemd target for a multiuser system with all required network services.

xinetd

Starts xinetd. xinetd can be used to make server services available on the system. For example, it can start vsftpd whenever an FTP connection is initiated.

rpcbind

Starts the rpcbind utility that converts RPC program numbers to universal addresses. It is needed for RPC services, such as an NFS server.

ypserv

Starts the NIS server.

ypbind

Starts the NIS client.

/etc/init.d/nfsserver

Starts the NFS server.

/etc/init.d/postfix

Controls the postfix process.

17.7 Setting Up Bonding Devices

  • Filename: net_bonding.xml
  • ID: sec.bond

For some systems, there is a desire to implement network connections that comply with more than the standard data security or availability requirements of a typical Ethernet device. In these cases, several Ethernet devices can be aggregated into a single bonding device.

The configuration of the bonding device is done by means of bonding module options. The behavior is mainly affected by the mode of the bonding device. By default, this is active-backup, which means that a different slave device becomes active if the active slave fails. The following bonding modes are available:

0 (balance-rr)

Packets are transmitted in round-robin fashion from the first to the last available interface. Provides fault tolerance and load balancing.

1 (active-backup)

Only one network interface is active. If it fails, a different interface becomes active. This setting is the default for SUSE Linux Enterprise Desktop. Provides fault tolerance.

2 (balance-xor)

Traffic is split between all available interfaces based on the following policy: [(source MAC address XOR'd with destination MAC address XOR packet type ID) modulo slave count]. Requires support from the switch. Provides fault tolerance and load balancing.

3 (broadcast)

All traffic is broadcast on all interfaces. Requires support from the switch. Provides fault tolerance.

4 (802.3ad)

Aggregates interfaces into groups that share the same speed and duplex settings. Requires ethtool support in the interface drivers, and a switch that supports and is configured for IEEE 802.3ad Dynamic link aggregation. Provides fault tolerance and load balancing.

5 (balance-tlb)

Adaptive transmit load balancing. Requires ethtool support in the interface drivers but no switch support. Provides fault tolerance and load balancing.

6 (balance-alb)

Adaptive load balancing. Requires ethtool support in the interface drivers but no switch support. Provides fault tolerance and load balancing.

For a more detailed description of the modes, see https://www.kernel.org/doc/Documentation/networking/bonding.txt.
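
To verify which mode and slave devices a running bond actually uses, you can read the bonding information that the kernel exports under /proc. Assuming a bonding device named bond0:

cat /proc/net/bonding/bond0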

Tip
Tip: Bonding and Xen

Using bonding devices is only of interest for machines where multiple real network cards are available. In most configurations, this means that you should use the bonding configuration only in Dom0. Setting up a bond in a VM Guest is only useful if multiple network cards are assigned to the VM Guest.

To configure a bonding device, use the following procedure:

  1. Run YaST › System › Network Settings.

  2. Use Add and change the Device Type to Bond. Proceed with Next.

  3. Select how to assign the IP address to the bonding device. Three methods are at your disposal:

    • No IP Address

    • Dynamic Address (with DHCP or Zeroconf)

    • Statically assigned IP Address

    Use the method that is appropriate for your environment.

  4. In the Bond Slaves tab, select the Ethernet devices that should be included into the bond by activating the related check box.

  5. Edit the Bond Driver Options and choose a bonding mode.

  6. Make sure that the parameter miimon=100 is added to the Bond Driver Options. Without this parameter, the link integrity is not checked regularly.

  7. Click Next and leave YaST with OK to create the device.

17.7.1 Hotplugging of Bonding Slaves

In specific network environments (such as High Availability), there are cases when you need to replace a bonding slave interface with another one. The reason may be a constantly failing network device. The solution is to set up hotplugging of bonding slaves.

The bond is configured as usual (according to man 5 ifcfg-bonding), for example:

ifcfg-bond0
          STARTMODE='auto' # or 'onboot'
          BOOTPROTO='static'
          IPADDR='192.168.0.1/24'
          BONDING_MASTER='yes'
          BONDING_SLAVE_0='eth0'
          BONDING_SLAVE_1='eth1'
          BONDING_MODULE_OPTS='mode=active-backup miimon=100'

The slaves are specified with STARTMODE=hotplug and BOOTPROTO=none:

ifcfg-eth0
          STARTMODE='hotplug'
          BOOTPROTO='none'

ifcfg-eth1
          STARTMODE='hotplug'
          BOOTPROTO='none'

BOOTPROTO=none uses the ethtool options (when provided), but does not set the link up on ifup eth0. The reason is that the slave interface is controlled by the bond master.

STARTMODE=hotplug causes the slave interface to join the bond automatically when it is available.

The udev rules in /etc/udev/rules.d/70-persistent-net.rules need to be changed to match the device by bus ID (udev KERNELS keyword equal to "SysFS BusID" as visible in hwinfo --netcard) instead of by MAC address. This allows replacement of defective hardware (a network card in the same slot but with a different MAC) and prevents confusion when the bond changes the MAC address of all its slaves.

For example:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
KERNELS=="0000:00:19.0", ATTR{dev_id}=="0x0", ATTR{type}=="1",
KERNEL=="eth*", NAME="eth0"

At boot time, the systemd network.service does not wait for the hotplug slaves, but for the bond to become ready, which requires at least one available slave. When one of the slave interfaces gets removed (unbind from NIC driver, rmmod of the NIC driver or true PCI hotplug remove) from the system, the kernel removes it from the bond automatically. When a new card is added to the system (replacement of the hardware in the slot), udev renames it using the bus-based persistent name rule to the name of the slave, and calls ifup for it. The ifup call automatically joins it into the bond.

17.8 Setting Up Team Devices for Network Teaming

  • Filename: net_teaming.xml
  • ID: sec.team

The term link aggregation is the general term which describes combining (or aggregating) network connections to provide a logical layer. Sometimes you find the terms channel teaming, Ethernet bonding, port trunking, etc., which are synonyms and refer to the same concept.

This concept is widely known as bonding and was originally integrated into the Linux kernel (see Section 17.7, “Setting Up Bonding Devices” for the original implementation). The term Network Teaming is used to refer to the new implementation of this concept.

The main difference between bonding and Network Teaming is that teaming supplies a set of small kernel modules responsible for providing an interface for teamd instances. Everything else is handled in user space. This is different from the original bonding implementation which contains all of its functionality exclusively in the kernel. For a comparison refer to Table 17.5, “Feature Comparison between Bonding and Team”.

Table 17.5: Feature Comparison between Bonding and Team
Feature                                     Bonding   Team
broadcast, round-robin TX policy            yes       yes
active-backup TX policy                     yes       yes
LACP (802.3ad) support                      yes       yes
hash-based TX policy                        yes       yes
user can set hash function                  no        yes
TX load-balancing support (TLB)             yes       yes
TX load-balancing support for LACP          no        yes
Ethtool link monitoring                     yes       yes
ARP link monitoring                         yes       yes
NS/NA (IPv6) link monitoring                no        yes
RCU locking on TX/RX paths                  no        yes
port prio and stickiness                    no        yes
separate per-port link monitoring setup     no        yes
multiple link monitoring setup              limited   yes
VLAN support                                yes       yes
multiple device stacking                    yes       yes

Source: http://libteam.org/files/teamdev.pp.pdf

Both implementations, bonding and Network Teaming, can be used in parallel. Network Teaming is an alternative to the existing bonding implementation. It does not replace bonding.

Network Teaming can be used for different use cases. The two most important use cases are explained later and involve:

  • Load balancing between different network devices.

  • Failover from one network device to another in case one of the devices should fail.

Currently, there is no YaST module to support creating a teaming device. You need to configure Network Teaming manually. The general procedure, which can be applied to all Network Teaming configurations, is shown below:

Procedure 17.1: General Procedure
  1. Make sure you have all the necessary packages installed. Install the packages libteam-tools, libteamdctl0, and python-libteam.

  2. Create a configuration file under /etc/sysconfig/network/. Usually it will be ifcfg-team0. If you need more than one Network Teaming device, give them ascending numbers.

    This configuration file contains several variables which are explained in the man pages (see man ifcfg and man ifcfg-team). An example configuration can be found in your system in the file /etc/sysconfig/network/ifcfg.template.

  3. Remove the configuration files of the interfaces which will be used for the teaming device (usually ifcfg-eth0 and ifcfg-eth1).

    It is recommended to make a backup and remove both files. Wicked will re-create the configuration files with the necessary parameters for teaming.

  4. Optionally, check if everything is included in Wicked's configuration file:

    wicked show-config
  5. Start the Network Teaming device team0:

    wicked ifup all team0

    In case you need additional debug information, use the option --debug all after the all subcommand.

  6. Check the status of the Network Teaming device. This can be done by the following commands:

    • Get the state of the teamd instance from Wicked:

      wicked ifstatus --verbose team0
    • Get the state of the entire instance:

      teamdctl team0 state
    • Get the systemd state of the teamd instance:

      systemctl status teamd@team0

    Each of them shows a slightly different view depending on your needs.

  7. In case you need to change something in the ifcfg-team0 file afterward, reload its configuration with:

    wicked ifreload team0

Do not use systemctl for starting or stopping the teaming device! Instead, use the wicked command as shown above.

To completely remove the team device, use this procedure:

Procedure 17.2: Removing a Team Device
  1. Stop the Network Teaming device team0:

    wicked ifdown team0
  2. Rename the file /etc/sysconfig/network/ifcfg-team0 to /etc/sysconfig/network/.ifcfg-team0. Inserting a dot in front of the file name makes it invisible for wicked. If you really do not need the configuration anymore, you can also remove the file.

  3. Reload the configuration:

    wicked ifreload all

17.8.1 Use Case: Loadbalancing with Network Teaming

Loadbalancing is used to improve bandwidth. Use the following configuration file to create a Network Teaming device with loadbalancing capabilities. Proceed with Procedure 17.1, “General Procedure” to set up the device. Check the output with teamdctl.

Example 17.12: Configuration for Loadbalancing with Network Teaming
STARTMODE=auto 1
BOOTPROTO=static 2
IPADDRESS="192.168.1.1/24" 2
IPADDR6="fd00:deca:fbad:50::1/64" 2

TEAM_RUNNER="loadbalance" 3
TEAM_LB_TX_HASH="ipv4,ipv6,eth,vlan"
TEAM_LB_TX_BALANCER_NAME="basic"
TEAM_LB_TX_BALANCER_INTERVAL="100"

TEAM_PORT_DEVICE_0="eth0" 4
TEAM_PORT_DEVICE_1="eth1" 4

TEAM_LW_NAME="ethtool" 5
TEAM_LW_ETHTOOL_DELAY_UP="10" 6
TEAM_LW_ETHTOOL_DELAY_DOWN="10" 6

1

Controls the start of the teaming device. The value auto means that the interface will be set up when the network service is available and will be started automatically on every reboot.

In case you need to control the device yourself (and prevent it from starting automatically), set STARTMODE to manual.

2

Sets a static IP address (here 192.168.1.1 for IPv4 and fd00:deca:fbad:50::1 for IPv6).

If the Network Teaming device should use a dynamic IP address, set BOOTPROTO="dhcp" and remove (or comment) the lines with IPADDR and IPADDR6.

3

Sets TEAM_RUNNER to loadbalance to activate the loadbalancing mode.

4

Specifies one or more devices which should be aggregated to create the Network Teaming device.

5

Defines a link watcher to monitor the state of subordinate devices. The default value ethtool checks only if the device is up and accessible. This makes the check fast. However, it does not check whether the device can really send or receive packets.

If you need higher confidence in the connection, use the arp_ping option. This sends pings to an arbitrary host (configured in the TEAM_LW_ARP_PING_TARGET_HOST variable). The Network Teaming device is considered to be up only if the replies are received.

6

Defines the delay in milliseconds between the link coming up (or down) and the runner being notified.

17.8.2 Use Case: Failover with Network Teaming

Failover is used to ensure high availability of a critical Network Teaming device by involving a parallel backup network device. The backup network device is running all the time and takes over if and when the main device fails.

Use the following configuration file to create a Network Teaming device with failover capabilities. Proceed with Procedure 17.1, “General Procedure” to set up the device. Check the output with teamdctl.

Example 17.13: Configuration for Failover with Network Teaming
STARTMODE=auto 1
BOOTPROTO=static 2
IPADDR="192.168.1.2/24" 2
IPADDR6="fd00:deca:fbad:50::2/64" 2

TEAM_RUNNER=activebackup 3
TEAM_PORT_DEVICE_0="eth0" 4
TEAM_PORT_DEVICE_1="eth1" 4

TEAM_LW_NAME=ethtool 5
TEAM_LW_ETHTOOL_DELAY_UP="10" 6
TEAM_LW_ETHTOOL_DELAY_DOWN="10" 6

1

Controls the start of the teaming device. The value auto means that the interface will be set up when the network service is available and will be started automatically on every reboot.

In case you need to control the device yourself (and prevent it from starting automatically), set STARTMODE to manual.

2

Sets a static IP address (here 192.168.1.2 for IPv4 and fd00:deca:fbad:50::2 for IPv6).

If the Network Teaming device should use a dynamic IP address, set BOOTPROTO="dhcp" and remove (or comment) the lines with IPADDR and IPADDR6.

3

Sets TEAM_RUNNER to activebackup to activate the failover mode.

4

Specifies one or more devices which should be aggregated to create the Network Teaming device.

5

Defines a link watcher to monitor the state of subordinate devices. The default value ethtool checks only if the device is up and accessible. This makes the check fast. However, it does not check whether the device can really send or receive packets.

If you need higher confidence in the connection, use the arp_ping option. This sends pings to an arbitrary host (configured in the TEAM_LW_ARP_PING_TARGET_HOST variable). The Network Teaming device is considered to be up only if the replies are received.

6

Defines the delay in milliseconds between the link coming up (or down) and the runner being notified.

17.8.3 Use Case: VLAN over Team Device

VLAN is an abbreviation of Virtual Local Area Network. It allows the running of multiple logical (virtual) Ethernet networks over one single physical Ethernet network. It logically splits the network into different broadcast domains so that packets are only switched between ports that are designated for the same VLAN.

The following use case creates two static VLANs on top of a team device:

  • vlan0, bound to the IP address 192.168.10.1

  • vlan1, bound to the IP address 192.168.20.1

Proceed as follows:

  1. Enable the VLAN tags on your switch. If you want to use loadbalancing for your team device, your switch needs to be capable of Link Aggregation Control Protocol (LACP) (802.3ad). Consult your hardware manual about the details.

  2. Decide if you want to use loadbalancing or failover for your team device. Set up your team device as described in Section 17.8.1, “Use Case: Loadbalancing with Network Teaming” or Section 17.8.2, “Use Case: Failover with Network Teaming”.

  3. In /etc/sysconfig/network create a file ifcfg-vlan0 with the following content:

    STARTMODE="auto"
    BOOTPROTO="static" 1
    IPADDR='192.168.10.1/24' 2
    ETHERDEVICE="team0" 3
    VLAN_ID="0" 4
    VLAN='yes'

    1

    Defines a fixed IP address, specified in IPADDR.

    2

    Defines the IP address, here with its netmask.

    3

    Contains the real interface to use for the VLAN interface, here our team device (team0).

    4

    Specifies a unique ID for the VLAN. Preferably, the file name and the VLAN_ID correspond to the name ifcfg-vlanVLAN_ID. In our case, VLAN_ID is 0, which leads to the file name ifcfg-vlan0.

  4. Copy the file /etc/sysconfig/network/ifcfg-vlan0 to /etc/sysconfig/network/ifcfg-vlan1 and change the following values:

    • IPADDR from 192.168.10.1/24 to 192.168.20.1/24.

    • VLAN_ID from 0 to 1.

  5. Start the two VLANs:

    root # wicked ifup vlan0 vlan1
  6. Check the output of ifconfig:

    root # ifconfig -a
    [...]
    vlan0     Link encap:Ethernet  HWaddr 08:00:27:DC:43:98
              inet addr:192.168.10.1 Bcast:192.168.10.255 Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fedc:4398/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:816 (816.0 b)
    
    vlan1     Link encap:Ethernet  HWaddr 08:00:27:DC:43:98
              inet addr:192.168.20.1 Bcast:192.168.20.255 Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fedc:4398/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:816 (816.0 b)
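
Since ifconfig is obsolete (see Section 17.6.3, “Testing the Configuration”), you can alternatively inspect the VLAN interfaces with ip, for example:

root # ip addr show dev vlan0
root # ip addr show dev vlan1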

18 Printer Operation

  • Filename: printing.xml
  • ID: cha.p

SUSE® Linux Enterprise Desktop supports printing with many types of printers, including remote network printers. Printers can be configured manually or with YaST. For configuration instructions, refer to Section 8.3, “Setting Up a Printer”. Both graphical and command line utilities are available for starting and managing print jobs. If your printer does not work as expected, refer to Section 18.8, “Troubleshooting”.

CUPS (Common Unix Printing System) is the standard print system in SUSE Linux Enterprise Desktop.

Printers can be distinguished by interface, such as USB or network, and printer language. When buying a printer, make sure that the printer has an interface that is supported (USB, Ethernet, or Wi-Fi) and a suitable printer language. Printers can be categorized on the basis of the following three classes of printer languages:

PostScript Printers

PostScript is the printer language in which most print jobs in Linux and Unix are generated and processed by the internal print system. If PostScript documents can be processed directly by the printer and do not need to be converted in additional stages in the print system, the number of potential error sources is reduced.

Currently PostScript is being replaced by PDF as the standard print job format. PostScript+PDF printers that can directly print PDF (in addition to PostScript) already exist. For traditional PostScript printers PDF needs to be converted to PostScript in the printing workflow.

Standard Printers (Languages Like PCL and ESC/P)

In the case of known printer languages, the print system can convert PostScript jobs to the respective printer language with Ghostscript. This processing stage is called interpreting. The best-known languages are PCL (which is mostly used by HP printers and their clones) and ESC/P (which is used by Epson printers). These printer languages are usually supported by Linux and produce an adequate print result. Linux may not be able to address some special printer functions. Except for HP and Epson, there are currently no printer manufacturers who develop Linux drivers and make them available to Linux distributors under an open source license.

Proprietary Printers (Also Called GDI Printers)

These printers do not support any of the common printer languages. They use their own undocumented printer languages, which are subject to change when a new edition of a model is released. Usually only Windows drivers are available for these printers. See Section 18.8.1, “Printers without Standard Printer Language Support” for more information.

Before you buy a new printer, refer to the following sources to check how well the printer you intend to buy is supported:

http://www.linuxfoundation.org/OpenPrinting/

The OpenPrinting home page with the printer database. The database shows the latest Linux support status. However, a Linux distribution can only integrate the drivers available at production time. Accordingly, a printer currently rated as perfectly supported may not have had this status when the latest SUSE Linux Enterprise Desktop version was released. Thus, the database may not necessarily indicate the correct status, but only provide an approximation.

http://pages.cs.wisc.edu/~ghost/

The Ghostscript Web page.

/usr/share/doc/packages/ghostscript/catalog.devices

List of built-in Ghostscript drivers.

18.1 The CUPS Workflow

The user creates a print job. The print job consists of the data to print plus information for the spooler. This includes the name of the printer or the name of the print queue, and optionally, information for the filter, such as printer-specific options.

At least one dedicated print queue exists for every printer. The spooler holds the print job in the queue until the desired printer is ready to receive data. When the printer is ready, the spooler sends the data through the filter and back-end to the printer.

The filter converts the data generated by the application that is printing (usually PostScript or PDF, but also ASCII, JPEG, etc.) into printer-specific data (PostScript, PCL, ESC/P, etc.). The features of the printer are described in the PPD files. A PPD file contains printer-specific options with the parameters needed to enable them on the printer. The filter system makes sure that options selected by the user are enabled.

If you use a PostScript printer, the filter system converts the data into printer-specific PostScript. This does not require a printer driver. If you use a non-PostScript printer, the filter system converts the data into printer-specific data. This requires a printer driver suitable for your printer. The back-end receives the printer-specific data from the filter then passes it to the printer.

18.2 Methods and Protocols for Connecting Printers

There are various possibilities for connecting a printer to the system. The configuration of CUPS does not distinguish between a local printer and a printer connected to the system over the network. For more information about the printer connection, read the article CUPS in a Nutshell at http://en.opensuse.org/SDB:CUPS_in_a_Nutshell.

Warning
Warning: Changing Cable Connections in a Running System

When connecting the printer to the machine, do not forget that only USB devices can be plugged in or unplugged during operation. To avoid damaging your system or printer, shut down the system before changing any connections that are not USB.

18.3 Installing the Software

PPD (PostScript printer description) is the computer language that describes the properties (like resolution) and options (such as the availability of a duplex unit) of printers. These descriptions are required for using various printer options in CUPS. Without a PPD file, the print data would be forwarded to the printer in a raw state, which is usually not desired.

To configure a PostScript printer, the best approach is to get a suitable PPD file. Many PPD files are available in the packages manufacturer-PPDs and OpenPrintingPPDs-postscript. See Section 18.7.3, “PPD Files in Various Packages” and Section 18.8.2, “No Suitable PPD File Available for a PostScript Printer”.

New PPD files can be stored in the directory /usr/share/cups/model/ or added to the print system with YaST as described in Section 8.3.1.1, “Adding Drivers with YaST”. Subsequently, the PPD file can be selected during the printer setup.

Be careful if a printer manufacturer wants you to install entire software packages. This kind of installation may result in the loss of the support provided by SUSE Linux Enterprise Desktop. Also, print commands may work differently and the system may no longer be able to address devices of other manufacturers. For this reason, the installation of manufacturer software is not recommended.

18.4 Network Printers

A network printer can support various protocols, some even concurrently. Although most of the supported protocols are standardized, some manufacturers modify the standard. Manufacturers then provide drivers for only a few operating systems. Unfortunately, Linux drivers are rarely provided. The current situation is such that you cannot act on the assumption that every protocol works smoothly in Linux. Therefore, you may need to experiment with various options to achieve a functional configuration.

CUPS supports the socket, LPD, IPP and smb protocols.

socket

Socket refers to a connection in which the plain print data is sent directly to a TCP socket. Some socket port numbers that are commonly used are 9100 or 35. The device URI (uniform resource identifier) syntax is: socket://IP.OF.THE.PRINTER:PORT, for example: socket://192.168.2.202:9100/.

LPD (Line Printer Daemon)

The LPD protocol is described in RFC 1179. Under this protocol, some job-related data, such as the ID of the print queue, is sent before the actual print data is sent. Therefore, a print queue must be specified when configuring the LPD protocol. The implementations of diverse printer manufacturers are flexible enough to accept any name as the print queue. If necessary, the printer manual should indicate what name to use. LPT, LPT1, LP1 or similar names are often used. The port number for an LPD service is 515. An example device URI is lpd://192.168.2.202/LPT1.

IPP (Internet Printing Protocol)

IPP is a relatively new protocol (1999) based on the HTTP protocol. With IPP, more job-related data is transmitted than with the other protocols. CUPS uses IPP for internal data transmission. The name of the print queue is necessary to configure IPP correctly. The port number for IPP is 631. Example device URIs are ipp://192.168.2.202/ps and ipp://192.168.2.202/printers/ps.

SMB (Windows Share)

CUPS also supports printing on printers connected to Windows shares. The protocol used for this purpose is SMB. SMB uses the port numbers 137, 138 and 139. Example device URIs are smb://user:password@workgroup/smb.example.com/printer, smb://user:password@smb.example.com/printer, and smb://smb.example.com/printer.

The protocol supported by the printer must be determined before configuration. If the manufacturer does not provide the needed information, the command nmap (which comes with the nmap package) can be used to ascertain the protocol. nmap checks a host for open ports. For example:

nmap -p 35,137-139,515,631,9100-10000 IP.OF.THE.PRINTER

18.5 Configuring CUPS with Command Line Tools

CUPS can be configured with command line tools like lpinfo, lpadmin and lpoptions. You need a device URI consisting of a back-end, such as USB, and parameters. To determine valid device URIs on your system use the command lpinfo -v | grep ":/":

# lpinfo -v | grep ":/"
direct usb://ACME/FunPrinter%20XL
network socket://192.168.2.253

With lpadmin the CUPS server administrator can add, remove or manage print queues. To add a print queue, use the following syntax:

lpadmin -p QUEUE -v DEVICE-URI -P PPD-FILE -E

Then the device (-v) is available as QUEUE (-p), using the specified PPD file (-P). This means that you must know the PPD file and the device URI to configure the printer manually.

Do not use -E as the first option. For all CUPS commands, -E as the first argument enforces the use of an encrypted connection. To enable the printer, -E must be used as shown in the following example:

lpadmin -p ps -v usb://ACME/FunPrinter%20XL -P \
/usr/share/cups/model/Postscript.ppd.gz -E

The following example configures a network printer:

lpadmin -p ps -v socket://192.168.2.202:9100/ -P \
/usr/share/cups/model/Postscript-level1.ppd.gz -E

For more options of lpadmin, see the man page of lpadmin(8).

During printer setup, certain options are set as default. These options can be modified for every print job (depending on the print tool used). Changing these default options with YaST is also possible. Using command line tools, set default options as follows:

  1. First, list all options:

    lpoptions -p QUEUE -l

    Example:

    Resolution/Output Resolution: 150dpi *300dpi 600dpi

    The activated default option is identified by a preceding asterisk (*).

  2. Change the option with lpadmin:

    lpadmin -p QUEUE -o Resolution=600dpi
  3. Check the new setting:

    lpoptions -p QUEUE -l
    
    Resolution/Output Resolution: 150dpi 300dpi *600dpi

When a normal user runs lpoptions, the settings are written to ~/.cups/lpoptions. However, root settings are written to /etc/cups/lpoptions.

18.6 Printing from the Command Line

To print from the command line, enter lp -d QUEUENAME FILENAME, substituting the corresponding names for QUEUENAME and FILENAME.

Some applications rely on the lp command for printing. In this case, enter the correct command in the application's print dialog, usually without specifying FILENAME, for example, lp -d QUEUENAME.
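
For example, assuming a queue named ps (as set up in Section 18.5, “Configuring CUPS with Command Line Tools”) and a file report.pdf in the current directory, the job would be submitted with:

lp -d ps report.pdf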

18.7 Special Features in SUSE Linux Enterprise Desktop

Several CUPS features have been adapted for SUSE Linux Enterprise Desktop. Some of the most important changes are covered here.

18.7.1 CUPS and Firewall

After having performed a default installation of SUSE Linux Enterprise Desktop, SuSEFirewall2 is active and the network interfaces are configured to be in the External Zone which blocks incoming traffic. More information about the SuSEFirewall2 configuration is available in Section 15.4, “SuSEFirewall2” and at http://en.opensuse.org/SDB:CUPS_and_SANE_Firewall_settings.

18.7.1.1 CUPS Client

Normally, a CUPS client runs on a regular workstation located in a trusted network environment behind a firewall. In this case it is recommended to configure the network interface to be in the Internal Zone, so the workstation is reachable from within the network.

18.7.1.2 CUPS Server

If the CUPS server is part of a trusted network environment protected by a firewall, the network interface should be configured to be in the Internal Zone of the firewall. It is not recommended to set up a CUPS server in an untrusted network environment unless you ensure that it is protected by special firewall rules and secure settings in the CUPS configuration.

18.7.2 Browsing for Network Printers

CUPS servers regularly announce the availability and status information of shared printers over the network. Clients can access this information to display a list of available printers in printing dialogs, for example. This is called browsing.

CUPS servers announce their print queues over the network either via the traditional CUPS browsing protocol or via Bonjour/DNS-SD. To be able to browse network print queues, the service cups-browsed needs to run on all clients that print via CUPS servers. cups-browsed is not started by default. To start it for the active session, use sudo systemctl start cups-browsed. To ensure it is automatically started after booting, enable it with sudo systemctl enable cups-browsed on all clients.

In case browsing does not work after having started cups-browsed, the CUPS server(s) probably announce the network print queues via Bonjour/DNS-SD. In this case you need to additionally install the package avahi and start the associated service with sudo systemctl start avahi-daemon on all clients.

18.7.3 PPD Files in Various Packages

The YaST printer configuration sets up the queues for CUPS using the PPD files installed in /usr/share/cups/model. To find the suitable PPD files for the printer model, YaST compares the vendor and model determined during hardware detection with the vendors and models in all PPD files. For this purpose, the YaST printer configuration generates a database from the vendor and model information extracted from the PPD files.

The configuration using only PPD files and no other information sources has the advantage that the PPD files in /usr/share/cups/model can be modified freely. For example, if you have PostScript printers the PPD files can be copied directly to /usr/share/cups/model (if they do not already exist in the manufacturer-PPDs or OpenPrintingPPDs-postscript packages) to achieve an optimum configuration for your printers.

Additional PPD files are provided by the following packages:

  • gutenprint: the Gutenprint driver and its matching PPDs

  • splix: the SpliX driver and its matching PPDs

  • OpenPrintingPPDs-ghostscript: PPDs for Ghostscript built-in drivers

  • OpenPrintingPPDs-hpijs: PPDs for the HPIJS driver for non-HP printers

18.8 Troubleshooting

The following sections cover some of the most frequently encountered printer hardware and software problems and ways to solve or circumvent these problems. Among the topics covered are GDI printers, PPD files and port configuration. Common network printer problems, defective printouts, and queue handling are also addressed.

18.8.1 Printers without Standard Printer Language Support

These printers do not support any common printer language and can only be addressed with special proprietary control sequences. Therefore they can only work with the operating system versions for which the manufacturer delivers a driver. GDI is a programming interface developed by Microsoft* for graphics devices. Usually the manufacturer delivers drivers only for Windows, and since the Windows driver uses the GDI interface these printers are also called GDI printers. The actual problem is not the programming interface, but that these printers can only be addressed with the proprietary printer language of the respective printer model.

Some GDI printers can be switched to operate either in GDI mode or in one of the standard printer languages. See the printer manual to find out whether this is possible. Some models require special Windows software to do the switch (note that the Windows printer driver may always switch the printer back into GDI mode when printing from Windows). For other GDI printers, extension modules for a standard printer language are available.

Some manufacturers provide proprietary drivers for their printers. The disadvantage of proprietary printer drivers is that there is no guarantee that these work with the installed print system or that they are suitable for the various hardware platforms. In contrast, printers that support a standard printer language do not depend on a special print system version or a special hardware platform.

Instead of spending time trying to make a proprietary Linux driver work, it may be more cost-effective to purchase a printer which supports a standard printer language (preferably PostScript). This would solve the driver problem once and for all, eliminating the need to install and configure special driver software and obtain driver updates that may be required because of new developments in the print system.

18.8.2 No Suitable PPD File Available for a PostScript Printer

If the manufacturer-PPDs or OpenPrintingPPDs-postscript packages do not contain a suitable PPD file for a PostScript printer, it should be possible to use the PPD file from the driver CD of the printer manufacturer or download a suitable PPD file from the Web page of the printer manufacturer.

If the PPD file is provided as a zip archive (.zip) or a self-extracting zip archive (.exe), unpack it with unzip. First, review the license terms of the PPD file. Then use the cupstestppd utility to check if the PPD file complies with Adobe PostScript Printer Description File Format Specification, version 4.3. If the utility returns FAIL, the errors in the PPD file are serious and are likely to cause major problems. The problem spots reported by cupstestppd should be eliminated. If necessary, ask the printer manufacturer for a suitable PPD file.

18.8.3 Network Printer Connections

Identifying Network Problems

Connect the printer directly to the computer. For test purposes, configure the printer as a local printer. If this works, the problems are related to the network.

Checking the TCP/IP Network

The TCP/IP network and name resolution must be functional.

Checking a Remote lpd

Use the following command to test if a TCP connection can be established to lpd (port 515) on HOST:

netcat -z HOST 515 && echo ok || echo failed

If the connection to lpd cannot be established, lpd may not be active or there may be basic network problems.

Provided that the respective lpd is active and the host accepts queries, run the following command as root to query a status report for QUEUE on remote HOST:

echo -e "\004queue" \
| netcat -w 2 -p 722 HOST 515

If lpd does not respond, it may not be active or there may be basic network problems. If lpd responds, the response should show why printing is not possible on the queue on HOST. If you receive a response like that shown in Example 18.1, “Error Message from lpd”, the problem is caused by the remote lpd.

Example 18.1: Error Message from lpd
lpd: your host does not have line printer access
lpd: queue does not exist
printer: spooling disabled
printer: printing disabled

Checking a Remote cupsd

A CUPS network server can broadcast its queues by default every 30 seconds on UDP port 631. Accordingly, the following command can be used to test whether there is a broadcasting CUPS network server in the network. Make sure to stop your local CUPS daemon before executing the command.

netcat -u -l -p 631 & PID=$! ; sleep 40 ; kill $PID

If a broadcasting CUPS network server exists, the output appears as shown in Example 18.2, “Broadcast from the CUPS Network Server”.

Example 18.2: Broadcast from the CUPS Network Server
ipp://192.168.2.202:631/printers/queue

The following command can be used to test if a TCP connection can be established to cupsd (port 631) on HOST:

netcat -z HOST 631 && echo ok || echo failed

If the connection to cupsd cannot be established, cupsd may not be active or there may be basic network problems. lpstat -h HOST -l -t returns a (possibly very long) status report for all queues on HOST, provided the respective cupsd is active and the host accepts queries.

The next command can be used to test if the QUEUE on HOST accepts a print job consisting of a single carriage-return character. Nothing should be printed. Possibly, a blank page may be ejected.

echo -en "\r" \
| lp -d QUEUE -h HOST

Troubleshooting a Network Printer or Print Server Machine

Spoolers running on a print server machine sometimes cause problems when they need to deal with multiple print jobs. Since this is caused by the spooler in the print server machine, there is no way to resolve this issue. As a work-around, circumvent the spooler in the print server machine by addressing the printer connected to the print server machine directly with the TCP socket. See Section 18.4, “Network Printers”.

In this way, the print server machine is reduced to a converter between the various forms of data transfer (TCP/IP network and local printer connection). To use this method, you need to know the TCP port on the print server machine. If the printer is connected to the print server machine and turned on, this TCP port can usually be determined with the nmap utility from the nmap package some time after the print server machine is powered up. For example, nmap IP_ADDRESS may deliver the following output for a print server machine:

Port       State       Service
23/tcp     open        telnet
80/tcp     open        http
515/tcp    open        printer
631/tcp    open        cups
9100/tcp   open        jetdirect

This output indicates that the printer connected to the print server machine can be addressed via TCP socket on port 9100. By default, nmap only checks several commonly known ports listed in /usr/share/nmap/nmap-services. To check all possible ports, use the command nmap -p FROM_PORT-TO_PORT IP_ADDRESS. This may take some time. For further information, refer to the man page of nmap.

Enter a command like

echo -en "\rHello\r\f" | netcat -w 1 IP-address port
cat file | netcat -w 1 IP-address port

to send character strings or files directly to the respective port to test if the printer can be addressed on this port.

18.8.4 Defective Printouts without Error Message

For the print system, the print job is completed when the CUPS back-end completes the data transfer to the recipient (printer). If further processing on the recipient fails (for example, if the printer is not able to print the printer-specific data) the print system does not notice this. If the printer cannot print the printer-specific data, select a PPD file that is more suitable for the printer.

18.8.5 Disabled Queues

If the data transfer to the recipient fails entirely after several attempts, the CUPS back-end, such as USB or socket, reports an error to the print system (to cupsd). The back-end determines how many unsuccessful attempts are appropriate until the data transfer is reported as impossible. As further attempts would be in vain, cupsd disables printing for the respective queue. After eliminating the cause of the problem, the system administrator must re-enable printing with the command cupsenable.
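
For example, assuming the disabled queue is named ps, re-enable printing with:

cupsenable ps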

18.8.6 CUPS Browsing: Deleting Print Jobs

If a CUPS network server broadcasts its queues to the client hosts via browsing and a suitable local cupsd is active on the client hosts, the client cupsd accepts print jobs from applications and forwards them to the cupsd on the server. When cupsd on the server accepts a print job, it is assigned a new job number. Therefore, the job number on the client host is different from the job number on the server. As a print job is usually forwarded immediately, it cannot be deleted with the job number on the client host. This is because the client cupsd regards the print job as completed when it has been forwarded to the server cupsd.

To delete the print job on the server, use a command such as lpstat -h cups.example.com -o to determine the job number on the server. This assumes that the server has not already completed the print job (that is, sent it completely to the printer). Use the obtained job number to delete the print job on the server as follows:

cancel -h cups.example.com QUEUE-JOBNUMBER

18.8.7 Defective Print Jobs and Data Transfer Errors

If you switch the printer off or shut down the computer during the printing process, print jobs remain in the queue. Printing resumes when the computer (or the printer) is switched back on. Defective print jobs must be removed from the queue with cancel.

If a print job is corrupted or an error occurs in the communication between the host and the printer, the printer cannot process the data correctly and prints numerous sheets of paper with unintelligible characters. To fix the problem, follow these steps:

  1. To stop printing, remove all paper from ink jet printers or open the paper trays of laser printers. High-quality printers have a button for canceling the current printout.

  2. The print job may still be in the queue, because jobs are only removed after they are sent completely to the printer. Use lpstat -o or lpstat -h cups.example.com -o to check which queue is currently printing. Delete the print job with cancel QUEUE-JOBNUMBER or cancel -h cups.example.com QUEUE-JOBNUMBER.

  3. Some data may still be transferred to the printer even though the print job has been deleted from the queue. Check if a CUPS back-end process is still running for the respective queue and terminate it.

  4. Reset the printer completely by switching it off for some time. Then insert the paper and turn on the printer.

18.8.8 Debugging CUPS

Use the following generic procedure to locate problems in CUPS:

  1. Set LogLevel debug in /etc/cups/cupsd.conf.

  2. Stop cupsd.

  3. Remove /var/log/cups/error_log* to avoid having to search through very large log files.

  4. Start cupsd.

  5. Repeat the action that led to the problem.

  6. Check the messages in /var/log/cups/error_log* to identify the cause of the problem.
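
These steps can be scripted. The following is a minimal sketch, assuming CUPS runs as the systemd unit cups.service and that /etc/cups/cupsd.conf already contains a LogLevel directive:

sudo sed -i 's/^LogLevel .*/LogLevel debug/' /etc/cups/cupsd.conf   # step 1
sudo systemctl stop cups.service                                    # step 2
sudo rm -f /var/log/cups/error_log*                                 # step 3
sudo systemctl start cups.service                                   # step 4
# step 5: repeat the action that led to the problem, then:
less /var/log/cups/error_log                                        # step 6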

18.8.9 For More Information

In-depth information about printing on SUSE Linux is presented in the openSUSE Support Database at http://en.opensuse.org/Portal:Printing. Solutions to many specific problems are presented in the SUSE Knowledgebase (http://www.suse.com/support/). Locate the relevant articles with a text search for CUPS.

19 The X Window System

  • Filename: x11.xml
  • ID: cha.x11

The X Window System (X11) is the de facto standard for graphical user interfaces in Unix. X is network-based, enabling applications started on one host to be displayed on another host connected over any kind of network (LAN or Internet). This chapter provides basic information on the X configuration, and background information about the use of fonts in SUSE® Linux Enterprise Desktop.

Usually, the X Window System needs no configuration. The hardware is dynamically detected during X start-up. The use of xorg.conf is therefore deprecated. If you still need to specify custom options to change the way X behaves, you can still do so by modifying configuration files under /etc/X11/xorg.conf.d/.

19.1 Installing and Configuring Fonts

  • Filename: x11_fonts.xml
  • ID: sec.x11.fontsys

Fonts in Linux can be categorized into two types:

Outline or Vector Fonts

Contains a mathematical description, in the form of drawing instructions, of the shape of each glyph. As such, each glyph can be scaled to arbitrary sizes without loss of quality. Before such a font (or glyph) can be used, the mathematical description needs to be transformed into a raster (grid). This process is called font rasterization. Font hinting (embedded inside the font) improves and optimizes the rendering result for a particular size. Rasterization and hinting are done with the FreeType library.

Common formats under Linux are PostScript Type 1 and Type 2, TrueType, and OpenType.

Bitmap or Raster Fonts

Consists of an array of pixels designed for a specific font size. Bitmap fonts are extremely fast and simple to render. However, compared to vector fonts, bitmap fonts cannot be scaled without losing quality. As such, these fonts are usually distributed in different sizes. These days, bitmap fonts are still used in the Linux console and sometimes in terminals.

Under Linux, Portable Compiled Format (PCF) or Glyph Bitmap Distribution Format (BDF) are the most common formats.

The appearance of these fonts can be influenced by two main aspects:

  • choosing a suitable font family,

  • rendering the font with an algorithm that achieves results comfortable for the viewer's eyes.

The second point is only relevant to vector fonts. Although both points are highly subjective, some defaults need to be established.

Linux font rendering systems consist of several libraries with different relations. The basic font rendering library is FreeType, which converts font glyphs of supported formats into optimized bitmap glyphs. The rendering process is controlled by an algorithm and its parameters (which may be subject to patent issues).

Every program or library which uses FreeType should consult the Fontconfig library. This library gathers font configuration from users and from the system. When a user amends their Fontconfig settings, the change affects all Fontconfig-aware applications.

More sophisticated tasks, such as the OpenType shaping needed for scripts like Arabic, Han or Phags-Pa, and other higher-level text processing are done using HarfBuzz or Pango.

19.1.1 Showing Installed Fonts

To get an overview of which fonts are installed on your system, use the commands rpm or fc-list. Both will give you a good answer, but may return different lists depending on the system and user configuration:

rpm

Invoke rpm to see which software packages containing fonts are installed on your system:

rpm -qa '*fonts*'

Every font package should satisfy this expression. However, the command may return some false positives like fonts-config (which is neither a font nor does it contain fonts).

fc-list

Invoke fc-list to get an overview of which font families can be accessed, whether they are installed system-wide or in your home directory:

fc-list ':' family
Note
Note: Command fc-list

The command fc-list is a wrapper to the Fontconfig library. It is possible to query a lot of interesting information from Fontconfig—or, to be more precise, from its cache. See man 1 fc-list for more details.

19.1.2 Viewing Fonts

If you want to know what an installed font family looks like, either use the command ftview (package ft2demos) or visit http://fontinfo.opensuse.org/. For example, to display the FreeMono font in 14 point, use ftview like this:

ftview 14 /usr/share/fonts/truetype/FreeMono.ttf

If you need further information, go to http://fontinfo.opensuse.org/ to find out which styles (regular, bold, italic, etc.) and languages are supported.

19.1.3 Querying Fonts

To query which font is used when a pattern is given, use the fc-match command.

For example, if your pattern contains an already installed font, fc-match returns the file name, font family, and the style:

tux > fc-match 'Liberation Serif'
LiberationSerif-Regular.ttf: "Liberation Serif" "Regular"

If the desired font does not exist on your system, Fontconfig's matching rules apply and try to find the most similar font available. This means your request is substituted:

tux > fc-match 'Foo Family'
DejaVuSans.ttf: "DejaVu Sans" "Book"

Fontconfig supports aliases: a name is substituted with another family name. A typical example is the generic names such as sans-serif, serif, and monospace. These alias names can be substituted by real family names or even a preference list of family names:

tux > for font in serif sans mono; do fc-match "$font" ; done
DejaVuSerif.ttf: "DejaVu Serif" "Book"
DejaVuSans.ttf: "DejaVu Sans" "Book"
DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book"

The result may vary on your system, depending on which fonts are currently installed.

Note
Note: Similarity Rules according to Fontconfig

Fontconfig always returns a real family (if at least one is installed) that is as similar as possible to the given request. Similarity depends on Fontconfig's internal metrics and on the user's or administrator's Fontconfig settings.

19.1.4 Installing Fonts

To install a new font, use one of these major methods:

  1. Manually install the font files such as *.ttf or *.otf to a known font directory. If it needs to be system-wide, use the standard directory /usr/share/fonts. For installation in your home directory, use ~/.config/fonts.

    If you want to deviate from the standard directories, Fontconfig allows you to choose another one. Let Fontconfig know by using the <dir> element, see Section 19.1.5.2, “Diving into Fontconfig XML” for details.

  2. Install fonts using zypper. Lots of fonts are already available as a package, be it on your SUSE distribution or in the M17N:fonts repository. Add the repository to your list using the following command. For example, to add a repository for SLE 12 SP3 (the repository alias M17N-fonts is an arbitrary name):

    sudo zypper ar \
         http://download.opensuse.org/repositories/M17N:/fonts/SLE_12_SP3/ M17N-fonts

    To search for your FONT_FAMILY_NAME use this command:

    sudo zypper se 'FONT_FAMILY_NAME*fonts'
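
As a minimal sketch of the first method, the following installs a single font file into your home directory and refreshes the font cache; FreeMono.ttf stands in for any downloaded font file:

mkdir -p ~/.config/fonts
cp FreeMono.ttf ~/.config/fonts/
fc-cache -f ~/.config/fonts    # rebuild the font cache
fc-list | grep -i freemono     # verify the new font is visible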

19.1.5 Configuring the Appearance of Fonts

Depending on the rendering medium and font size, the result may be unsatisfactory. For example, an average monitor these days has a resolution of 100 dpi, which makes pixels too big and glyphs look clunky.

There are several algorithms available to deal with low resolutions, such as anti-aliasing (grayscale smoothing), hinting (fitting to the grid), or subpixel rendering (tripling resolution in one direction). These algorithms can also differ from one font format to another.

Important
Important: Patent Issues with Subpixel Rendering

Subpixel rendering is not used in SUSE distributions. Although FreeType2 has support for this algorithm, it is covered by several patents expiring at the end of the year 2019. Therefore, setting subpixel rendering options in Fontconfig has no effect unless the system has a FreeType2 library with subpixel rendering compiled in.

Via Fontconfig, it is possible to select a rendering algorithm for every font individually or for a set of fonts.

19.1.5.1 Configuring Fonts via sysconfig

SUSE Linux Enterprise Desktop comes with a sysconfig layer above Fontconfig. This is a good starting point for experimenting with font configuration. To change the default settings, edit the configuration file /etc/sysconfig/fonts-config (or use the YaST sysconfig module). After you have edited the file, run fonts-config:

sudo /usr/sbin/fonts-config

Restart the application to make the effect visible. Keep in mind the following issues:

  • A few applications do not need to be restarted. For example, Firefox re-reads the Fontconfig configuration from time to time. Newly created or reloaded tabs get the new font configuration later.

  • The fonts-config script is called automatically after every package installation or removal (if not, it is a bug in the font software package).

  • Every sysconfig variable can be temporarily overridden by the fonts-config command line option. See fonts-config --help for details.

There are several sysconfig variables which can be altered. See man 1 fonts-config or the help page of the YaST sysconfig module. The following variables are examples:

Usage of Rendering Algorithms

Consider FORCE_HINTSTYLE, FORCE_AUTOHINT, FORCE_BW, FORCE_BW_MONOSPACE, USE_EMBEDDED_BITMAPS and EMBEDDED_BITMAP_LANGAGES.

Preference Lists of Generic Aliases

Use PREFER_SANS_FAMILIES, PREFER_SERIF_FAMILIES, PREFER_MONO_FAMILIES and SEARCH_METRIC_COMPATIBLE.

The following list provides some configuration examples, sorted from the most readable fonts (more contrast) to the most beautiful (more smoothing).

Bitmap Fonts

Prefer bitmap fonts via the PREFER_*_FAMILIES variables. Follow the example in the help section for these variables. Be aware that these fonts are rendered in black and white, not smoothed, and that bitmap fonts are available in several sizes only. Consider using

SEARCH_METRIC_COMPATIBLE="no"

to disable metric compatibility-driven family name substitutions.

Scalable Fonts Rendered Black and White

Scalable fonts rendered without antialiasing can result in a similar outcome to bitmap fonts, while maintaining font scalability. Use well hinted fonts like the Liberation families. Unfortunately, well hinted fonts are rare. Set the following variable to force this method:

FORCE_BW="yes"
Monospaced Fonts Rendered Black and White

Render monospaced fonts without antialiasing only, otherwise use default settings:

FORCE_BW_MONOSPACE="yes"
Default Settings

All fonts are rendered with antialiasing. Well hinted fonts will be rendered with the byte code interpreter (BCI) and the rest with autohinter (hintstyle=hintslight). Leave all relevant sysconfig variables to the default setting.

CFF Fonts

Use fonts in CFF format. Given the current improvements in FreeType2, they can be considered more readable than the default TrueType fonts. Try them out by following the example of PREFER_*_FAMILIES. Possibly make them darker and bolder with:

FORCE_HINTSTYLE="hintfull"

as they are rendered with hintstyle=hintslight by default. Also consider using:

SEARCH_METRIC_COMPATIBLE="no"
Autohinter Exclusively

Even for a well hinted font, use FreeType2's autohinter. That can lead to thicker, sometimes fuzzier letter shapes with lower contrast. Set the following variable to activate this:

FORCE_AUTOHINT="yes"

Use FORCE_HINTSTYLE to control the level of hinting.

19.1.5.2 Diving into Fontconfig XML

Fontconfig's configuration format is the eXtensible Markup Language (XML). These few examples are not a complete reference, but a brief overview. Details and other inspiration can be found in man 5 fonts-conf or in /etc/fonts/conf.d/.

The central Fontconfig configuration file is /etc/fonts/fonts.conf, which, among other things, includes the whole /etc/fonts/conf.d/ directory. To customize Fontconfig, there are two places where you can insert your changes:

Fontconfig Configuration Files
  1. System-wide changes.  Edit the file /etc/fonts/local.conf (by default, it contains an empty fontconfig element).

  2. User-specific changes.  Edit the file ~/.config/fontconfig/fonts.conf. Place Fontconfig configuration files in the ~/.config/fontconfig/conf.d/ directory.

User-specific changes overwrite any system-wide settings.

Note
Note: Deprecated User Configuration File

The file ~/.fonts.conf is marked as deprecated and should not be used anymore. Use ~/.config/fontconfig/fonts.conf instead.

Every configuration file needs to have a fontconfig element. As such, the minimal file looks like this:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Insert your changes here -->
</fontconfig>

If the default directories are not enough, insert the dir element with the respective directory:

<dir>/usr/share/fonts2</dir>

Fontconfig searches recursively for fonts.
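
For example, the following sketch creates a user-specific configuration file that adds such a directory; /usr/share/fonts2 is only an example path:

mkdir -p ~/.config/fontconfig
cat > ~/.config/fontconfig/fonts.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<dir>/usr/share/fonts2</dir>
</fontconfig>
EOF
fc-cache -f    # rebuild the font caches so the new directory is scanned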

Font-rendering algorithms can be chosen with the following Fontconfig snippet (see Example 19.1, “Specifying Rendering Algorithms”):

Example 19.1: Specifying Rendering Algorithms
<match target="font">
 <test name="family">
  <string>FAMILY_NAME</string>
 </test>
 <edit name="antialias" mode="assign">
  <bool>true</bool>
 </edit>
 <edit name="hinting" mode="assign">
  <bool>true</bool>
 </edit>
 <edit name="autohint" mode="assign">
  <bool>false</bool>
 </edit>
 <edit name="hintstyle" mode="assign">
  <const>hintfull</const>
 </edit>
</match>

Various properties of fonts can be tested. For example, the <test> element can test for the font family (as shown in the example), size interval, spacing, font format, and others. When omitting <test> completely, all <edit> elements are applied to every font (a global change).

Example 19.2: Aliases and Family Name Substitutions
Rule 1
<alias>
 <family>Alegreya SC</family>
 <default>
  <family>serif</family>
 </default>
</alias>
Rule 2
<alias>
 <family>serif</family>
 <prefer>
  <family>Droid Serif</family>
 </prefer>
</alias>
Rule 3
<alias>
 <family>serif</family>
 <accept>
  <family>STIXGeneral</family>
 </accept>
</alias>

The rules from Example 19.2, “Aliases and Family Name Substitutions” create a prioritized family list (PFL). Depending on the element, different actions are performed:

<default> from Rule 1

This rule adds a serif family name at the end of the PFL.

<prefer> from Rule 2

This rule adds Droid Serif just before the first occurrence of serif, whenever serif appears in the PFL.

<accept> from Rule 3

This rule adds a STIXGeneral family name just after the first occurrence of the serif family name in the PFL.

Putting this together, when snippets occur in the order Rule 1 - Rule 2 - Rule 3 and the user requests Alegreya SC, then the PFL is created as depicted in Table 19.1, “Generating PFL from Fontconfig rules”.

Table 19.1: Generating PFL from Fontconfig rules

Order      Current PFL
Request    Alegreya SC
Rule 1     Alegreya SC, serif
Rule 2     Alegreya SC, Droid Serif, serif
Rule 3     Alegreya SC, Droid Serif, serif, STIXGeneral

In Fontconfig's metrics, the family name has higher priority than other patterns, like style, size, etc. Fontconfig checks which family is currently installed on the system. If Alegreya SC is installed, Fontconfig returns it. If not, it asks for Droid Serif, and so on.
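
You can inspect the resulting prioritized list yourself: the -s option of fc-match prints the whole sorted list instead of only the best match. The families returned depend on the fonts installed on your system:

fc-match -s 'Alegreya SC' | head    # the first line is the best available match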

Be careful. When the order of Fontconfig snippets is changed, Fontconfig can return different results, as depicted in Table 19.2, “Results from Generating PFL from Fontconfig Rules with Changed Order”.

Table 19.2: Results from Generating PFL from Fontconfig Rules with Changed Order

Order      Current PFL           Note
Request    Alegreya SC           Same request performed.
Rule 2     Alegreya SC           serif not in PFL, nothing is substituted
Rule 3     Alegreya SC           serif not in PFL, nothing is substituted
Rule 1     Alegreya SC, serif    Alegreya SC present in PFL, substitution is performed

Note
Note: Implication

Think of the <default> alias as a classification or inclusion of this group (if not installed). As the example shows, <default> should always precede the <prefer> and <accept> aliases of that group.

<default> classification is not limited to the generic aliases serif, sans-serif and monospace. See /usr/share/fontconfig/conf.avail/30-metric-aliases.conf for a complex example.

The following Fontconfig snippet in Example 19.3, “Aliases and Family Name Substitutions” creates a serif group. Any family in this group can substitute for another when the requested font is not installed.

Example 19.3: Aliases and Family Name Substitutions
<alias>
 <family>Alegreya SC</family>
 <default>
  <family>serif</family>
 </default>
</alias>
<alias>
 <family>Droid Serif</family>
 <default>
  <family>serif</family>
 </default>
</alias>
<alias>
 <family>STIXGeneral</family>
 <default>
  <family>serif</family>
 </default>
</alias>
<alias>
 <family>serif</family>
 <accept>
  <family>Droid Serif</family>
  <family>STIXGeneral</family>
  <family>Alegreya SC</family>
 </accept>
</alias>

Priority is given by the order in the <accept> alias. Similarly, stronger <prefer> aliases can be used.

Example 19.2, “Aliases and Family Name Substitutions” is expanded by Example 19.4, “Aliases and Family Names Substitutions”.

Example 19.4: Aliases and Family Names Substitutions
Rule 4
<alias>
 <family>serif</family>
 <accept>
  <family>Liberation Serif</family>
 </accept>
</alias>
Rule 5
<alias>
 <family>serif</family>
 <prefer>
  <family>DejaVu Serif</family>
 </prefer>
</alias>

The expanded configuration from Example 19.4, “Aliases and Family Names Substitutions” would lead to the following PFL evolution:

Table 19.3: Results from Generating PFL from Fontconfig Rules

Order      Current PFL
Request    Alegreya SC
Rule 1     Alegreya SC, serif
Rule 2     Alegreya SC, Droid Serif, serif
Rule 3     Alegreya SC, Droid Serif, serif, STIXGeneral
Rule 4     Alegreya SC, Droid Serif, serif, Liberation Serif, STIXGeneral
Rule 5     Alegreya SC, Droid Serif, DejaVu Serif, serif, Liberation Serif, STIXGeneral

Note
Note: Implications
  • If multiple <accept> declarations for the same generic name exist, the declaration that is parsed last wins. If possible, do not use <accept> after user (/etc/fonts/conf.d/*-user.conf) when creating a system-wide configuration.

  • If multiple <prefer> declarations for the same generic name exist, the declaration that is parsed last wins. If possible, do not use <prefer> before user in the system-wide configuration.

  • Every <prefer> declaration overwrites <accept> declarations for the same generic name. If the administrator wants to allow the user to use not only <prefer> but also <accept>, the administrator should not use <prefer> in the system-wide configuration. On the other hand, as users mostly use <prefer>, this should not have any detrimental effect. The use of <prefer> is also seen in system-wide configurations.

19.2 For More Information

Install the package xorg-docs to get more in-depth information about X11. man 5 xorg.conf tells you more about the format of the manual configuration (if needed). More information on X11 development can be found on the project's home page at http://www.x.org.

Drivers are found in xf86-video-* packages, for example xf86-video-nv. Many of the drivers delivered with these packages are described in detail in the related manual page. For example, if you use the nv driver, find more information about this driver in man 4 nv.

Information about third-party drivers should be available in /usr/share/doc/packages/<package_name>. For example, the documentation of x11-video-nvidiaG03 is available in /usr/share/doc/packages/x11-video-nvidiaG03 after the package was installed.

20 Accessing File Systems with FUSE

  • Filename: fuse.xml
  • ID: cha.fuse
Abstract

FUSE is the acronym for file system in user space. This means you can configure and mount a file system as an unprivileged user. Normally, you need to be root for this task. FUSE alone is a kernel module. Combined with plug-ins, it allows you to extend FUSE to access almost all file systems like remote SSH connections, ISO images, and more.

20.1 Configuring FUSE

Before you can use FUSE, you need to install the package fuse. Depending on which file system you want to use, you need additional plug-ins, which are available as separate packages.

Generally you do not need to configure FUSE. However, it is a good idea to create a directory where all your mount points are gathered. For example, you can create a directory ~/mounts and create subdirectories there for your different file systems.
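
As an illustration, assuming the fuseiso plug-in is installed, an ISO image can be mounted and unmounted as an unprivileged user; the paths are only examples:

mkdir -p ~/mounts/iso
fuseiso image.iso ~/mounts/iso    # mount the ISO image
ls ~/mounts/iso                   # browse its contents
fusermount -u ~/mounts/iso        # unmount it again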

20.2 Mounting an NTFS Partition

NTFS, the New Technology File System, is the default file system of Windows. Since under normal circumstances the unprivileged user cannot mount NTFS block devices using the external FUSE library, the process of mounting a Windows partition described below requires root privileges.

  1. Become root and install the package ntfs-3g.

  2. Create a directory that is to be used as a mount point, for example ~/mounts/windows.

  3. Find out which Windows partition you need. Use YaST and start the partitioner module to see which partition belongs to Windows, but do not modify anything. Alternatively, become root and execute /sbin/fdisk -l. Look for partitions with a partition type of HPFS/NTFS.

  4. Mount the partition in read-write mode. Replace the placeholder DEVICE with your respective Windows partition:

    ntfs-3g /dev/DEVICE MOUNT POINT

    To use your Windows partition in read-only mode, append -o ro:

    ntfs-3g /dev/DEVICE MOUNT POINT -o ro

    The command ntfs-3g uses the current user (UID) and group (GID) to mount the given device. If you want to set the write permissions to a different user, use the command id USER to get the output of the UID and GID values. Set it with:

    id tux
    uid=1000(tux) gid=100(users) groups=100(users),16(dialout),33(video)
    ntfs-3g /dev/DEVICE MOUNT POINT -o uid=1000,gid=100

    Find additional options in the man page.

To unmount the resource, run fusermount -u MOUNT POINT.

20.3 For More Information

See the home page http://fuse.sourceforge.net of FUSE for more information.

21 Managing Kernel Modules

  • Filename: kernel_modules.xml
  • ID: cha.mod

Although Linux is a monolithic kernel, it can be extended using kernel modules. These are special objects that can be inserted into the kernel and removed on demand. In practical terms, kernel modules make it possible to add and remove drivers and interfaces that are not included in the kernel itself. Linux provides several commands for managing kernel modules.

21.1 Listing Loaded Modules with lsmod and modinfo

Use the lsmod command to view what kernel modules are currently loaded. The output of the command may look as follows:

tux > lsmod
Module                  Size  Used by
snd_usb_audio         188416  2
snd_usbmidi_lib        36864  1 snd_usb_audio
hid_plantronics        16384  0
snd_rawmidi            36864  1 snd_usbmidi_lib
snd_seq_device         16384  1 snd_rawmidi
fuse                  106496  3
nfsv3                  45056  1
nfs_acl                16384  1 nfsv3

The output is divided into three columns. The Module column lists the names of the loaded modules, while the Size column displays the size of each module. The Used by column shows the number of referring modules and their names. Note that this list may be incomplete.

To view detailed information about a specific kernel module, use the modinfo MODULE_NAME command, where MODULE_NAME is the name of the desired kernel module. Note that the modinfo binary resides in the /sbin directory, which is not in the user's PATH environment variable. This means that you must specify the full path to the binary when running the modinfo command as a regular user:

tux > /sbin/modinfo kvm
filename:       /lib/modules/4.4.57-18.3-default/kernel/arch/x86/kvm/kvm.ko
license:        GPL
author:         Qumranet
srcversion:     BDFD8098BEEA517CB75959B
depends:        irqbypass
intree:         Y
vermagic:       4.4.57-18.3-default SMP mod_unload modversions
signer:         openSUSE Secure Boot Signkey
sig_key:        03:32:FA:9C:BF:0D:88:BF:21:92:4B:0D:E8:2A:09:A5:4D:5D:EF:C8
sig_hashalgo:   sha256
parm:           ignore_msrs:bool
parm:           min_timer_period_us:uint
parm:           kvmclock_periodic_sync:bool
parm:           tsc_tolerance_ppm:uint
parm:           lapic_timer_advance_ns:uint
parm:           halt_poll_ns:uint
parm:           halt_poll_ns_grow:int
parm:           halt_poll_ns_shrink:int

21.2 Adding and Removing Kernel Modules

While it is possible to use insmod and rmmod to add and remove kernel modules, it is recommended to use the modprobe tool instead. modprobe offers several important advantages, including automatic dependency resolution and blacklisting.

When used without any parameters, the modprobe command loads the specified kernel module. modprobe must be run with root privileges:

tux > sudo modprobe acpi

To remove a kernel module, use the -r parameter:

sudo modprobe -r acpi

21.2.1 Loading Kernel Modules Automatically on Boot

Instead of loading kernel modules manually, you can load them automatically during the boot process using the systemd-modules-load.service service. To enable a kernel module, add a .conf file to the /etc/modules-load.d/ directory. It is good practice to give the configuration file the same name as the module, for example:

/etc/modules-load.d/rt2800usb.conf

The configuration file must contain the name of the desired kernel module (for example, rt2800usb).
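
For example, the following minimal sketch makes the rt2800usb module load automatically at boot:

echo rt2800usb | sudo tee /etc/modules-load.d/rt2800usb.conf
# load it immediately without rebooting:
sudo systemctl restart systemd-modules-load.service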

The described technique allows you to load kernel modules without any parameters. If you need to load a kernel module with specific options, add a configuration file to the /etc/modprobe.d/ directory instead. The file must have the .conf extension. The name of the file should adhere to the following naming convention: priority-modulename.conf, for example: 50-thinkfan.conf. The configuration file must contain the name of the kernel module and the desired parameters. You can use the example command below to create a configuration file containing the name of the kernel module and its parameters:

echo "options thinkpad_acpi fan_control=1" | sudo tee /etc/modprobe.d/thinkfan.conf
Note
Note: Loading Kernel Modules

Most kernel modules are loaded by the system automatically when a device is detected or user space requests specific functionality. Thus, adding modules manually to /etc/modules-load.d/ is rarely required.

21.2.2 Blacklisting Kernel Modules with modprobe

Blacklisting a kernel module prevents it from loading during the boot process. This can be useful when you want to disable a module that you suspect is causing problems on your system. Note that you can still load blacklisted kernel modules manually using the insmod or modprobe tools.

To blacklist a module, add the blacklist MODULE_NAME line to the /etc/modprobe.d/50-blacklist.conf file. For example:

blacklist nouveau

Run the mkinitrd command as root to generate a new initrd image, then reboot your machine. These steps can be performed using the following command:

su
echo "blacklist nouveau" >> /etc/modprobe.d/50-blacklist.conf && mkinitrd && reboot

To disable a kernel module temporarily, blacklist it on the fly during boot. To do this, press the E key when you see the boot screen. This drops you into a minimal editor that allows you to modify boot parameters. Locate the line that looks as follows:

linux /boot/vmlinuz...splash= silent quiet showopts

Add the modprobe.blacklist=MODULE_NAME command to the end of the line. For example:

linux /boot/vmlinuz...splash= silent quiet showopts modprobe.blacklist=nouveau

Press F10 or Ctrl+X to boot with the specified configuration.

To blacklist a kernel module permanently via GRUB, open the /etc/default/grub file for editing, and add the modprobe.blacklist=MODULE_NAME option to the GRUB_CMDLINE_LINUX variable. Then run the sudo grub2-mkconfig -o /boot/grub2/grub.cfg command to enable the changes.
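
As a sketch, assuming /etc/default/grub contains a GRUB_CMDLINE_LINUX="..." line, the nouveau module can be blacklisted permanently like this:

# prepend the option to the existing GRUB_CMDLINE_LINUX value
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&modprobe.blacklist=nouveau /' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg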

22 Dynamic Kernel Device Management with udev

  • Filename: udev.xml
  • ID: cha.udev

The kernel can add or remove almost any device in a running system. Changes in the device state (whether a device is plugged in or removed) need to be propagated to user space. Devices need to be configured when they are plugged in and recognized. Users of a certain device need to be informed about any changes in this device's recognized state. udev provides the needed infrastructure to dynamically maintain the device node files and symbolic links in the /dev directory. udev rules provide a way to plug external tools into the kernel device event processing. This allows you to customize udev device handling by adding certain scripts to execute as part of kernel device handling, or request and import additional data to evaluate during device handling.

22.1 The /dev Directory

The device nodes in the /dev directory provide access to the corresponding kernel devices. With udev, the /dev directory reflects the current state of the kernel. Every kernel device has one corresponding device file. If a device is disconnected from the system, the device node is removed.

The content of the /dev directory is kept on a temporary file system and all files are rendered at every system start-up. Manually created or modified files do not, by design, survive a reboot. Static files and directories that should always be in the /dev directory regardless of the state of the corresponding kernel device can be created with systemd-tmpfiles. The configuration files are found in /usr/lib/tmpfiles.d/ and /etc/tmpfiles.d/; for more information, see the systemd-tmpfiles(8) man page.

22.2 Kernel uevents and udev

The required device information is exported by the sysfs file system. For every device the kernel has detected and initialized, a directory with the device name is created. It contains attribute files with device-specific properties.

Every time a device is added or removed, the kernel sends a uevent to notify udev of the change. The udev daemon reads and parses all provided rules from the /etc/udev/rules.d/*.rules files once at start-up and keeps them in memory. If rules files are changed, added or removed, the daemon can reload the in-memory representation of all rules with the command udevadm control --reload. For more details on udev rules and their syntax, refer to Section 22.6, “Influencing Kernel Device Event Handling with udev Rules”.

Every received event is matched against the set of provided rules. The rules can add or change event environment keys, request a specific name for the device node to create, add symbolic links pointing to the node or add programs to run after the device node is created. The driver core uevents are received from a kernel netlink socket.

22.3 Drivers, Kernel Modules and Devices

The kernel bus drivers probe for devices. For every detected device, the kernel creates an internal device structure while the driver core sends a uevent to the udev daemon. Bus devices identify themselves by a specially-formatted ID, which tells what kind of device it is. Usually these IDs consist of vendor and product ID and other subsystem-specific values. Every bus has its own scheme for these IDs, called MODALIAS. The kernel takes the device information, composes a MODALIAS ID string from it and sends that string along with the event. For a USB mouse, it looks like this:

MODALIAS=usb:v046DpC03Ed2000dc00dsc00dp00ic03isc01ip02

Every device driver carries a list of known aliases for devices it can handle. The list is contained in the kernel module file itself. The program depmod reads the ID lists and creates the file modules.alias in the kernel's /lib/modules directory for all currently available modules. With this infrastructure, module loading is as easy as calling modprobe for every event that carries a MODALIAS key. If modprobe $MODALIAS is called, it matches the device alias composed for the device with the aliases provided by the modules. If a matching entry is found, that module is loaded. All this is automatically triggered by udev.

22.4 Booting and Initial Device Setup

All device events happening during the boot process before the udev daemon is running are lost, because the infrastructure to handle these events resides on the root file system and is not available at that time. To cover that loss, the kernel provides a uevent file located in the device directory of every device in the sysfs file system. By writing add to that file, the kernel resends the same event as the one lost during boot. A simple loop over all uevent files in /sys triggers all events again to create the device nodes and perform device setup.
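
A sketch of such a loop is shown below. In practice, udevadm trigger performs this replay for you; the loop only illustrates the mechanism:

# replay lost "add" events for all devices known to sysfs (run as root)
for uevent in /sys/bus/*/devices/*/uevent; do
  echo add > "$uevent" 2>/dev/null
done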

As an example, a USB mouse present during boot may not be initialized by the early boot logic, because the driver is not available at that time. The event for the device discovery was lost, so no kernel module was found for the device. Instead of manually searching for connected devices, udev requests all device events from the kernel after the root file system is available, so the event for the USB mouse device runs again. Now it finds the kernel module on the mounted root file system and the USB mouse can be initialized.

From user space, there is no visible difference between a device coldplug sequence and a device discovery during runtime. In both cases, the same rules are used to match and the same configured programs are run.

22.5 Monitoring the Running udev Daemon

The program udevadm monitor can be used to visualize the driver core events and the timing of the udev event processes.

UEVENT[1185238505.276660] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1 (usb)
UDEV  [1185238505.279198] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1 (usb)
UEVENT[1185238505.279527] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0 (usb)
UDEV  [1185238505.285573] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0 (usb)
UEVENT[1185238505.298878] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10 (input)
UDEV  [1185238505.305026] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10 (input)
UEVENT[1185238505.305442] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/mouse2 (input)
UEVENT[1185238505.306440] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/event4 (input)
UDEV  [1185238505.325384] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/event4 (input)
UDEV  [1185238505.342257] add   /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/mouse2 (input)

The UEVENT lines show the events the kernel has sent over netlink. The UDEV lines show the finished udev event handlers. The timing is printed in microseconds. The time between UEVENT and UDEV is the time udev took to process this event, or the time by which the udev daemon delayed its execution to synchronize this event with related and already running events. For example, events for hard disk partitions always wait for the main disk device event to finish, because the partition events may rely on the data that the main disk event has queried from the hardware.

udevadm monitor --env shows the complete event environment:

ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10
SUBSYSTEM=input
SEQNUM=1181
NAME="Logitech USB-PS/2 Optical Mouse"
PHYS="usb-0000:00:1d.2-1/input0"
UNIQ=""
EV=7
KEY=70000 0 0 0 0
REL=103
MODALIAS=input:b0003v046DpC03Ee0110-e0,1,2,k110,111,112,r0,1,8,amlsfw

udev also sends messages to syslog. The default syslog priority that controls which messages are sent to syslog is specified in the udev configuration file /etc/udev/udev.conf. The log priority of the running daemon can be changed with udevadm control --log_priority=LEVEL/NUMBER.

22.6 Influencing Kernel Device Event Handling with udev Rules

A udev rule can match any property the kernel adds to the event itself or any information that the kernel exports to sysfs. The rule can also request additional information from external programs. Every event is matched against all provided rules. All rules are located in the /etc/udev/rules.d directory.

Every line in the rules file contains at least one key value pair. There are two kinds of keys, match and assignment keys. If all match keys match their values, the rule is applied and the assignment keys are assigned the specified value. A matching rule may specify the name of the device node, add symbolic links pointing to the node or run a specified program as part of the event handling. If no matching rule is found, the default device node name is used to create the device node. Detailed information about the rule syntax and the provided keys to match or import data are described in the udev man page. The following example rules provide a basic introduction to udev rule syntax. The example rules are all taken from the udev default rule set that is located under /etc/udev/rules.d/50-udev-default.rules.

Example 22.1: Example udev Rules
# console
KERNEL=="console", MODE="0600", OPTIONS="last_rule"

# serial devices
KERNEL=="ttyUSB*", ATTRS{product}=="[Pp]alm*Handheld*", SYMLINK+="pilot"

# printer
SUBSYSTEM=="usb", KERNEL=="lp*", NAME="usb/%k", SYMLINK+="usb%k", GROUP="lp"

# kernel firmware loader
SUBSYSTEM=="firmware", ACTION=="add", RUN+="firmware.sh"

The console rule consists of three keys: one match key (KERNEL) and two assign keys (MODE, OPTIONS). The KERNEL match rule searches the device list for any items of the type console. Only exact matches are valid and trigger this rule to be executed. The MODE key assigns special permissions to the device node, in this case, read and write permissions to the owner of this device only. The OPTIONS key makes this rule the last rule to be applied to any device of this type. Any later rule matching this particular device type does not have any effect.

The serial devices rule is not available in 50-udev-default.rules anymore, but it is still worth considering. It consists of two match keys (KERNEL and ATTRS) and one assign key (SYMLINK). The KERNEL key searches for all devices of the ttyUSB type. Using the * wild card, this key matches several of these devices. The second match key, ATTRS, checks whether the product attribute file in sysfs for any ttyUSB device contains a certain string. The assign key (SYMLINK) triggers the addition of a symbolic link to this device under /dev/pilot. The operator used in this key (+=) tells udev to additionally perform this action, even if previous or later rules add other symbolic links. As this rule contains two match keys, it is only applied if both conditions are met.

The printer rule deals with USB printers and contains two match keys which must both apply to get the entire rule applied (SUBSYSTEM and KERNEL). Three assign keys deal with the naming for this device type (NAME), the creation of symbolic device links (SYMLINK) and the group membership for this device type (GROUP). Using the * wild card in the KERNEL key makes it match several lp printer devices. Substitutions are used in both the NAME and the SYMLINK keys to extend these strings with the internal device name. For example, the symbolic link to the first lp USB printer would read /dev/usblp0.

The kernel firmware loader rule makes udev load additional firmware by an external helper script during runtime. The SUBSYSTEM match key searches for the firmware subsystem. The ACTION key checks whether any device belonging to the firmware subsystem has been added. The RUN+= key triggers the execution of the firmware.sh script to locate the firmware that is to be loaded.

Some general characteristics are common to all rules:

  • Each rule consists of one or more key value pairs separated by a comma.

  • A key's operation is determined by the operator. udev rules support several operators.

  • Each given value must be enclosed by quotation marks.

  • Each line of the rules file represents one rule. If a rule is longer than one line, use \ to join the different lines as you would do in shell syntax.

  • udev rules support shell-style pattern matching with the *, ?, and [] wild cards.

  • udev rules support substitutions.
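
When writing rules, udevadm helps to find usable match values and to dry-run the event handling; /dev/sda is only an example device:

udevadm info --attribute-walk --name=/dev/sda    # list attributes usable as match keys
udevadm test /sys/block/sda                      # simulate the event handling for the device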

22.6.1 Using Operators in udev Rules

When creating keys, you can choose from several operators, depending on the type of key you want to create. Match keys are normally used to find a value that either matches or explicitly mismatches the search value. Match keys contain either of the following operators:

==

Compare for equality. If the key contains a search pattern, all results matching this pattern are valid.

!=

Compare for non-equality. If the key contains a search pattern, only results not matching this pattern are valid.

Any of the following operators can be used with assign keys:

=

Assign a value to a key. If the key previously consisted of a list of values, the key resets and only the single value is assigned.

+=

Add a value to a key that contains a list of entries.

:=

Assign a final value. Disallow any later change by later rules.

22.6.2 Using Substitutions in udev Rules

udev rules support the use of placeholders and substitutions. Use them in a similar fashion as you would do in any other scripts. The following substitutions can be used with udev rules:

%r, $root

The device directory, /dev by default.

%p, $devpath

The value of DEVPATH.

%k, $kernel

The value of KERNEL or the internal device name.

%n, $number

The device number.

%N, $tempnode

The temporary name of the device file.

%M, $major

The major number of the device.

%m, $minor

The minor number of the device.

%s{ATTRIBUTE}, $attr{ATTRIBUTE}

The value of a sysfs attribute (specified by ATTRIBUTE).

%E{VARIABLE}, $env{VARIABLE}

The value of an environment variable (specified by VARIABLE).

%c, $result

The output of PROGRAM.

%%

The % character.

$$

The $ character.

22.6.3 Using udev Match Keys

Match keys describe conditions that must be met before a udev rule can be applied. The following match keys are available:

ACTION

The name of the event action, for example, add or remove when adding or removing a device.

DEVPATH

The device path of the event device, for example, DEVPATH=/bus/pci/drivers/ipw3945 to search for all events related to the ipw3945 driver.

KERNEL

The internal (kernel) name of the event device.

SUBSYSTEM

The subsystem of the event device, for example, SUBSYSTEM=usb for all events related to USB devices.

ATTR{FILENAME}

sysfs attributes of the event device. To match a string contained in the vendor attribute file name, you could use ATTR{vendor}=="On[sS]tream", for example.

KERNELS

Let udev search the device path upwards for a matching device name.

SUBSYSTEMS

Let udev search the device path upwards for a matching device subsystem name.

DRIVERS

Let udev search the device path upwards for a matching device driver name.

ATTRS{FILENAME}

Let udev search the device path upwards for a device with matching sysfs attribute values.

ENV{KEY}

The value of an environment variable, for example, ENV{ID_BUS}="ieee1394" to search for all events related to the FireWire bus ID.

PROGRAM

Let udev execute an external program. To be successful, the program must return with exit code zero. The program's output, printed to STDOUT, is available to the RESULT key.

RESULT

Match the output string of the last PROGRAM call. Either include this key in the same rule as the PROGRAM key or in a later one.

22.6.4 Using udev Assign Keys

In contrast to the match keys described above, assign keys do not describe conditions that must be met. They assign values, names and actions to the device nodes maintained by udev.

NAME

The name of the device node to be created. After a rule has set a node name, all other rules with a NAME key for this node are ignored.

SYMLINK

The name of a symbolic link related to the node to be created. Multiple matching rules can add symbolic links to be created with the device node. You can also specify multiple symbolic links for one node in one rule using the space character to separate the symbolic link names.

OWNER, GROUP, MODE

The permissions for the new device node. Values specified here overwrite anything that has been compiled in.

ATTR{KEY}

Specify a value to be written to a sysfs attribute of the event device. If the == operator is used, this key is also used to match against the value of a sysfs attribute.

ENV{KEY}

Tell udev to export a variable to the environment. If the == operator is used, this key is also used to match against an environment variable.

RUN

Tell udev to add a program to the list of programs to be executed for this device. Keep in mind to restrict this to very short tasks to avoid blocking further events for this device.

LABEL

Add a label where a GOTO can jump to.

GOTO

Tell udev to skip several rules and continue with the one that carries the label referenced by the GOTO key.

IMPORT{TYPE}

Load variables into the event environment such as the output of an external program. udev imports variables of several types. If no type is specified, udev tries to determine the type itself based on the executable bit of the file permissions.

  • program tells udev to execute an external program and import its output.

  • file tells udev to import a text file.

  • parent tells udev to import the stored keys from the parent device.

WAIT_FOR_SYSFS

Tells udev to wait for the specified sysfs file to be created for a certain device. For example, WAIT_FOR_SYSFS="ioerr_cnt" informs udev to wait until the ioerr_cnt file has been created.

OPTIONS

The OPTION key may have several values:

  • last_rule tells udev to ignore all later rules.

  • ignore_device tells udev to ignore this event completely.

  • ignore_remove tells udev to ignore all later remove events for the device.

  • all_partitions tells udev to create device nodes for all available partitions on a block device.

22.7 Persistent Device Naming

The dynamic device directory and the udev rules infrastructure make it possible to provide stable names for all disk devices—regardless of their order of recognition or the connection used for the device. Every appropriate block device the kernel creates is examined by tools with special knowledge about certain buses, drive types or file systems. Along with the dynamic kernel-provided device node name, udev maintains classes of persistent symbolic links pointing to the device:

/dev/disk
|-- by-id
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B -> ../../sda
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part1 -> ../../sda1
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part6 -> ../../sda6
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part7 -> ../../sda7
|   |-- usb-Generic_STORAGE_DEVICE_02773 -> ../../sdd
|   `-- usb-Generic_STORAGE_DEVICE_02773-part1 -> ../../sdd1
|-- by-label
|   |-- Photos -> ../../sdd1
|   |-- SUSE10 -> ../../sda7
|   `-- devel -> ../../sda6
|-- by-path
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part6 -> ../../sda6
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part7 -> ../../sda7
|   |-- pci-0000:00:1f.2-scsi-1:0:0:0 -> ../../sr0
|   |-- usb-02773:0:0:2 -> ../../sdd
|   |-- usb-02773:0:0:2-part1 -> ../../sdd1
`-- by-uuid
    |-- 159a47a4-e6e6-40be-a757-a629991479ae -> ../../sda7
    |-- 3e999973-00c9-4917-9442-b7633bd95b9e -> ../../sda6
    `-- 4210-8F8C -> ../../sdd1
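
To list the persistent symbolic links udev maintains for a particular device, query the udev database; /dev/sda is only an example:

udevadm info --query=symlink --name=/dev/sda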

22.8 Files used by udev

/sys/*

Virtual file system provided by the Linux kernel, exporting all currently known devices. This information is used by udev to create device nodes in /dev.

/dev/*

Dynamically created device nodes and static content created with systemd-tmpfiles; for more information, see the systemd-tmpfiles(8) man page.

The following files and directories contain the crucial elements of the udev infrastructure:

/etc/udev/udev.conf

Main udev configuration file.

/etc/udev/rules.d/*

udev event matching rules.

/usr/lib/tmpfiles.d/ and /etc/tmpfiles.d/

Responsible for static /dev content.

/usr/lib/udev/*

Helper programs called from udev rules.

22.9 For More Information

For more information about the udev infrastructure, refer to the following man pages:

udev

General information about udev, keys, rules and other important configuration issues.

udevadm

udevadm can be used to control the runtime behavior of udev, request kernel events, manage the event queue and provide simple debugging mechanisms.

udevd

Information about the udev event managing daemon.

23 Live Patching the Linux Kernel Using kGraft

  • Filename: kgraft.xml
  • ID: cha.kgraft
Abstract

This document describes the basic principles of the kGraft live patching technology and provides usage guidelines for the SLE Live Patching service.

kGraft is a live patching technology for runtime patching of the Linux kernel, without stopping the kernel. This maximizes system uptime, and thus system availability, which is important for mission-critical systems. By allowing dynamic patching of the kernel, the technology also encourages users to install critical security updates without deferring them to a scheduled downtime.

A kGraft patch is a kernel module, intended for replacing whole functions in the kernel. kGraft primarily offers in-kernel infrastructure for integration of the patched code with base kernel code at runtime.

SLE Live Patching is a service provided on top of regular SUSE Linux Enterprise Server maintenance. kGraft patches distributed through SLE Live Patching supplement regular SLES maintenance updates. Common update stack and procedures can be used for SLE Live Patching deployment.

The information provided in this document relates to the AMD64/Intel 64 and POWER architectures. If you use a different architecture, the procedures may differ.

23.1 Advantages of kGraft

Live kernel patching using kGraft is especially useful for quick response in emergencies (when serious vulnerabilities are known and should be fixed as soon as possible, or when there are serious system stability issues with a known fix). It is not intended for scheduled updates where time is not critical.

Typical use cases for kGraft include systems like in-memory databases with huge amounts of RAM, where boot-up times of 15 minutes or more are not uncommon, large simulations that need weeks or months without a restart, or infrastructure building blocks providing continuous service to many consumers.

The main advantage of kGraft is that it never requires stopping the kernel, not even for a short time period.

A kGraft patch is a .ko kernel module in an RPM package. It is inserted into the kernel using the insmod command when the package is installed or updated. kGraft replaces whole functions in the kernel, even while they are being executed. An updated kGraft module can replace an existing patch if necessary.

kGraft is also lean—it contains only a small amount of code, because it leverages other standard Linux technologies.

23.2 Low-level Function of kGraft

kGraft uses the ftrace infrastructure to perform patching. The following describes the implementation on the AMD64/Intel 64 architecture.

To patch a kernel function, kGraft needs some space at the start of the function to insert a jump to a new function. This space is allocated during kernel compilation by GCC with function profiling turned on. In particular, a 5-byte call instruction is injected at the start of kernel functions. When such an instrumented kernel boots, the profiling calls are replaced by 5-byte NOP (no operation) instructions.

After patching starts, the first byte is replaced by the INT3 (breakpoint) instruction. This ensures atomicity of the 5-byte instruction replacement. The other four bytes are replaced by the address of the new function. Finally, the first byte is replaced by the JMP (long jump) opcode.

Inter-processor non-maskable interrupts (IPI NMI) are used throughout the process to flush speculative decoding queues of other CPUs in the system. This allows switching to the new function without ever stopping the kernel, not even for a very short moment. The interruptions by IPI NMIs can be measured in microseconds and are not considered service interruptions as they happen while the kernel is running in any case.

Callers are never patched. Instead, the callee's NOPs are replaced by a JMP to the new function. JMP instructions remain forever. This takes care of function pointers, including in structures, and does not require saving any old data for the possibility of un-patching.

However, these steps alone would not be good enough: since the functions would be replaced non-atomically, a new fixed function in one part of the kernel could still be calling an old function elsewhere or vice versa. If the semantics of the function interfaces changed in the patch, chaos would ensue.

Thus, until all functions are replaced, kGraft uses an approach based on trampolines and similar to RCU (read-copy-update), to ensure a consistent view of the world to each user space thread, kernel thread and kernel interrupt. A per-thread flag is set on each kernel entry and exit. This way, an old function would always call another old function and a new function always a new one. Once all processes have the "new universe" flag set, patching is complete, trampolines can be removed and the code can operate at full speed without performance impact other than an extra-long jump for each patched function.

23.3 Installing kGraft Patches

This section describes the activation of the SUSE Linux Enterprise Live Patching extension and the installation of kGraft patches.

23.3.1 Activation of SLE Live Patching

To activate SLE Live Patching on your system, follow these steps:

  1. If your SLES system is not yet registered, register it. Registration can be done during the system installation or later using the YaST Product Registration module (yast2 registration). After registration, click Yes to see the list of available online updates.

    If your SLES system is already registered, but SLE Live Patching is not yet activated, open the YaST Product Registration module (yast2 registration) and click Select Extensions.

  2. Select SUSE Linux Enterprise Live Patching 12 in the list of available extensions and click Next.

  3. Confirm the license terms and click Next.

  4. Enter the SLE Live Patching registration code and click Next.

  5. Check the Installation Summary and selected Patterns. The pattern Live Patching should be selected for installation.

  6. Click Accept to complete the installation. This will install the base kGraft components on your system together with the initial live patch.

23.3.2 Updating the System

  1. SLE Live Patching updates are distributed in a form that allows using standard SLE update stack for patch application. The initial live patch can be updated using zypper patch, YaST Online Update or equivalent method.

  2. The kernel is patched automatically during the package installation. However, invocations of the old kernel functions are not completely eliminated until all sleeping processes wake up and get out of the way. This can take a considerable amount of time. Despite this, sleeping processes that use the old kernel functions are not considered a security issue. Nevertheless, in the current version of kGraft, it is not possible to apply another kGraft patch until all processes cross the kernel-user space boundary to stop using patched functions from the previous patch.

    To see the global status of patching, check the flag in /sys/kernel/kgraft/in_progress. The value 1 signifies the existence of sleeping processes that still need to be woken (the patching is still in progress). The value 0 signifies that all processes are using solely the patched functions and patching has finished already. Alternatively, use the kgr status command to obtain the same information.

    The flag can be checked on a per-process basis too. Check the number in /proc/PROCESS_NUMBER/kgr_in_progress for each process individually. Again, the value 1 signifies a sleeping process that still needs to be woken. Alternatively, use the kgr blocking command to output the list of sleeping processes.
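
A minimal sketch combining these checks:

cat /sys/kernel/kgraft/in_progress    # 1: patching in progress, 0: finished
kgr status                            # the same information
cat /proc/$$/kgr_in_progress          # per-process flag, here for the current shell
kgr blocking                          # list sleeping processes blocking the patch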

23.4 Patch Lifecycle

Expiration dates of live patches can be accessed with zypper lifecycle. Make sure that the package lifecycle-data-sle-live-patching is installed.

tux > zypper lifecycle

Product end of support
Codestream: SUSE Linux Enterprise Server 12             2024-10-31
SUSE Linux Enterprise Server 12 SP2                     n/a*

Extension end of support
SUSE Linux Enterprise Live Patching                     2017-10-31

Package end of support if different from product:
SUSEConnect                              Now, installed 0.2.41-18.1, update available 0.2.42-19.3.1
apache2-utils                            Now


*) See https://www.suse.com/lifecycle  for latest information

When the expiration date of a patch is reached, no further live patches for this kernel version will be supplied. Plan an update of your kernel before the end of the live patch lifecycle period.

23.5 Removing a kGraft Patch

To remove a kGraft patch, use the following procedure:

  1. First remove the patch itself using Zypper:

    zypper rm kgraft-patch-3_12_32-25-default
  2. Then reboot the machine.

23.6 Stuck Kernel Execution Threads

Kernel threads need to be prepared to handle kGraft. Third-party software may not be ready for kGraft adoption, and its kernel modules may spawn kernel execution threads. These threads block the patching process indefinitely. As an emergency measure, kGraft offers the possibility to force the patching process to finish without waiting for all execution threads to cross the safety checkpoint. This can be achieved by writing 0 into /sys/kernel/kgraft/in_progress, as sketched below. Consult SUSE Support before performing this procedure.
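A minimal sketch of this emergency measure (run as root, and only after consulting SUSE Support):

echo 0 > /sys/kernel/kgraft/in_progress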

23.7 The kgr Tool

Several kGraft management tasks can be simplified with the kgr tool. The available commands are:

kgr status

Displays the overall status of kGraft patching (ready or in_progress).

kgr patches

Displays the list of loaded kGraft patches.

kgr blocking

Lists processes that are preventing kGraft patching from finishing. By default only the PIDs are listed. Specifying -v prints command lines if available. Another -v displays also stack traces.

For detailed information, see man kgr.

23.8 Scope of kGraft Technology

kGraft is based on replacing functions. Data structure alteration can be accomplished only indirectly with kGraft. As a result, changes to kernel data structures require special care and, if the change is too large, rebooting might be required. kGraft also might not be able to handle situations where one compiler is used to compile the old kernel and another compiler is used for compiling the patch.

Because of the way kGraft works, support for third-party modules that are spawning kernel threads is limited.

23.9 Scope of SLE Live Patching

Fixes for SUSE Common Vulnerability Scoring System (CVSS) level 7+ vulnerabilities and bug fixes related to system stability or data corruption will be shipped in the scope of SLE Live Patching. It might not be possible to produce a live patch for all kinds of fixes fulfilling the above criteria. SUSE reserves the right to skip fixes where production of a kernel live patch is unviable because of technical reasons. For more information on CVSS 3.0, which is the base for the SUSE CVSS rating, see https://www.first.org/cvss/.

23.10 Interaction with the Support Processes

While resolving a technical difficulty with SUSE Support, you may receive a so-called Program Temporary Fix (PTF). PTFs may be issued for various packages including those forming the base of SLE Live Patching.

kGraft PTFs complying with the conditions described in the previous section can be installed as usual and SUSE will ensure that the system in question does not need to be rebooted and that future live updates are applied cleanly.

PTFs issued for the base kernel disrupt the live patching process. First, installing the PTF kernel means a reboot as the kernel cannot be replaced as a whole at runtime. Second, another reboot is needed to replace the PTF with any regular maintenance updates for which the live patches are issued.

PTFs for other packages in SLE Live Patching can be treated like regular PTFs with the usual guarantees.

24 Special System Features

Abstract

This chapter starts with information about various software packages, the virtual consoles and the keyboard layout. We talk about software components like bash, cron and logrotate, because they were changed or enhanced during the last release cycles. Even if they are small or considered of minor importance, users may want to change their default behavior, because these components are often closely coupled with the system. The chapter concludes with a section about language and country-specific settings (I18N and L10N).

24.1 Information about Special Software Packages


The programs bash, cron, logrotate, locate, ulimit and free are very important for system administrators and many users. Man pages and info pages are two useful sources of information about commands, but neither is always available. GNU Emacs is a popular and very configurable text editor.

24.1.1 The bash Package and /etc/profile

Bash is the default system shell. When used as a login shell, it reads several initialization files. Bash processes them in the order they appear in this list:

  1. /etc/profile

  2. ~/.profile

  3. /etc/bash.bashrc

  4. ~/.bashrc

Make custom settings in ~/.profile or ~/.bashrc. To ensure the correct processing of these files, it is necessary to copy the basic settings from /etc/skel/.profile or /etc/skel/.bashrc into the home directory of the user. It is recommended to copy the settings from /etc/skel after an update. Execute the following shell commands to prevent the loss of personal adjustments:

mv ~/.bashrc ~/.bashrc.old
cp /etc/skel/.bashrc ~/.bashrc
mv ~/.profile ~/.profile.old
cp /etc/skel/.profile ~/.profile

Then copy personal adjustments back from the *.old files.
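For example, personal adjustments in ~/.bashrc could look like the following sketch; the alias and editor choice are arbitrary examples:

# ~/.bashrc: personal adjustments
alias ll='ls -l'        # convenient long listing
export EDITOR=vim       # preferred editor for programs that honor EDITOR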

24.1.2 The cron Package

Use cron to run commands automatically in the background at predefined times. cron uses specially formatted time tables, and the tool comes with several default ones. Users can also specify custom tables, if needed.

The cron tables are located in /var/spool/cron/tabs. /etc/crontab serves as a systemwide cron table. Enter the user name under which the command should run directly after the time table and before the command. In Example 24.1, “Entry in /etc/crontab”, root is entered. Package-specific tables, located in /etc/cron.d, have the same format. See the cron man page (man cron).

Example 24.1: Entry in /etc/crontab
1-59/5 * * * *   root   test -x /usr/sbin/atrun && /usr/sbin/atrun

You cannot edit /etc/crontab by calling the command crontab -e. This file must be loaded directly into an editor, then modified and saved.

A number of packages install shell scripts to the directories /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly, whose execution is controlled by /usr/lib/cron/run-crons. /usr/lib/cron/run-crons is run every 15 minutes from the main table (/etc/crontab). This guarantees that processes that may have been neglected can be run at the proper time.

To run the hourly, daily or other periodic maintenance scripts at custom times, remove the time stamp files regularly using /etc/crontab entries (see Example 24.2, “/etc/crontab: Remove Time Stamp Files”, which removes the hourly one before every full hour, the daily one once a day at 2:14 a.m., etc.).

Example 24.2: /etc/crontab: Remove Time Stamp Files
59 *  * * *     root  rm -f /var/spool/cron/lastrun/cron.hourly
14 2  * * *     root  rm -f /var/spool/cron/lastrun/cron.daily
29 2  * * 6     root  rm -f /var/spool/cron/lastrun/cron.weekly
44 2  1 * *     root  rm -f /var/spool/cron/lastrun/cron.monthly

Or you can set DAILY_TIME in /etc/sysconfig/cron to the time at which cron.daily should start. The setting MAX_NOT_RUN ensures that the daily tasks are triggered even if the computer was not turned on at the specified DAILY_TIME for an extended period. The maximum value of MAX_NOT_RUN is 14 days.
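For illustration, a hypothetical /etc/sysconfig/cron snippet; the values shown are examples, not defaults:

DAILY_TIME="22:00"   # start the cron.daily scripts at 10 p.m.
MAX_NOT_RUN="5"      # run missed daily tasks at most 5 days late (maximum: 14)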

The daily system maintenance jobs are distributed to various scripts for reasons of clarity. They are contained in the package aaa_base. /etc/cron.daily contains, for example, the components suse.de-backup-rpmdb, suse.de-clean-tmp or suse.de-cron-local.

24.1.3 Stopping Cron Status Messages

To avoid the mail flood caused by cron status messages, the default value of SEND_MAIL_ON_NO_ERROR in /etc/sysconfig/cron is set to "no" for new installations. Even with SEND_MAIL_ON_NO_ERROR set to "no", cron data output will still be sent to the MAILTO address, as documented in the cron man page.

When updating an existing installation, it is recommended to review these values and set them according to your needs.

24.1.4 Log Files: Package logrotate


There are several system services (daemons) that, along with the kernel itself, regularly record the system status and specific events in log files. This way, the administrator can regularly check the status of the system at a certain point in time, recognize errors or faulty functions and troubleshoot them with pinpoint precision. These log files are normally stored in /var/log as specified by FHS and grow on a daily basis. The logrotate package helps control the growth of these files. For more details, refer to Section 3.3, “Managing Log Files with logrotate”.

24.1.5 The locate Command

locate, a command for quickly finding files, is not included in the standard scope of installed software. If desired, install the package mlocate, the successor of the package findutils-locate. The updatedb process is started automatically every night or about 15 minutes after booting the system.

24.1.6 The ulimit Command

With the ulimit (user limits) command, it is possible to set limits for the use of system resources and to have these displayed. ulimit is especially useful for limiting available memory for applications. With this, an application can be prevented from co-opting too much of the system resources and slowing or even hanging up the operating system.

ulimit can be used with various options. To limit memory usage, use the options listed in Table 24.1, “ulimit: Setting Resources for the User”.

Table 24.1: ulimit: Setting Resources for the User

-m

The maximum resident set size

-v

The maximum amount of virtual memory available to the shell

-s

The maximum size of the stack

-c

The maximum size of core files created

-a

All current limits are reported

Systemwide default entries are set in /etc/profile. Editing this file directly is not recommended, because changes will be overwritten during system upgrades. To customize systemwide profile settings, use /etc/profile.local. Per-user settings should be made in ~USER/.bashrc.

Example 24.3: ulimit: Settings in ~/.bashrc
# Limits maximum resident set size (physical memory):
ulimit -m 98304

# Limits virtual memory:
ulimit -v 98304

Memory allocations must be specified in KB. For more detailed information, see man bash.

Important
Important: ulimit Support

Not all shells support ulimit directives. PAM (for example, pam_limits) offers comprehensive adjustment possibilities as an alternative to ulimit.

24.1.7 The free Command

The free command displays the total amount of free and used physical memory as well as swap space in the system and the buffers and cache consumed by the kernel. The concept of available RAM dates back to before the days of unified memory management. The slogan free memory is bad memory applies well to Linux. As a result, Linux has always made the effort to balance out caches without actually allowing free or unused memory.

The kernel does not have direct knowledge of any applications or user data. Instead, it manages applications and user data in a page cache. If memory runs short, parts of it are written to the swap partition or to files, from which they can be read back using the mmap system call (see man mmap).

The kernel also contains other caches, such as the slab cache, where the caches used for network access are stored. This may explain the differences between the counters in /proc/meminfo. Most, but not all, of them can be accessed via /proc/slabinfo.

However, if your goal is to find out how much RAM is currently being used, find this information in /proc/meminfo.

24.1.8 Man Pages and Info Pages

For some GNU applications (such as tar), the man pages are no longer maintained. For these commands, use the --help option for a quick overview, or see the info pages, which provide more in-depth instructions. Info is GNU's hypertext system. Read an introduction to this system by entering info info. Info pages can be viewed with Emacs by entering emacs -f info or directly in a console with info. You can also use tkinfo, xinfo or the help system to view info pages.

24.1.9 Selecting Man Pages Using the man Command

To read a man page enter man MAN_PAGE. If a man page with the same name exists in different sections, they will all be listed with the corresponding section numbers. Select the one to display. If you do not enter a section number within a few seconds, the first man page will be displayed.

To change this to the default system behavior, set MAN_POSIXLY_CORRECT=1 in a shell initialization file such as ~/.bashrc.
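For example, to display the printf man page from section 1 directly, and to make the setting permanent for future shells:

tux > man 1 printf
tux > echo 'export MAN_POSIXLY_CORRECT=1' >> ~/.bashrc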

24.1.10 Settings for GNU Emacs


GNU Emacs is a complex work environment. The following sections cover the configuration files processed when GNU Emacs is started. More information is available at http://www.gnu.org/software/emacs/.

On start-up, Emacs reads several files containing the settings of the user, system administrator and distributor for customization or preconfiguration. The initialization file ~/.emacs is installed to the home directories of the individual users from /etc/skel. .emacs, in turn, reads the file /etc/skel/.gnu-emacs. To customize the program, copy .gnu-emacs to the home directory (with cp /etc/skel/.gnu-emacs ~/.gnu-emacs) and make the desired settings there.

.gnu-emacs defines the file ~/.gnu-emacs-custom as custom-file. If users make settings with the customize options in Emacs, the settings are saved to ~/.gnu-emacs-custom.

With SUSE Linux Enterprise Desktop, the emacs package installs the file site-start.el in the directory /usr/share/emacs/site-lisp. The file site-start.el is loaded before the initialization file ~/.emacs. Among other things, site-start.el ensures that special configuration files distributed with Emacs add-on packages, such as psgml, are loaded automatically. Configuration files of this type are located in /usr/share/emacs/site-lisp, too, and always begin with suse-start-. The local system administrator can specify systemwide settings in default.el.

More information about these files is available in the Emacs info file under Init File: info:/emacs/InitFile. Information about how to disable the loading of these files (if necessary) is also provided at this location.

The components of Emacs are divided into several packages:

  • The base package emacs.

  • emacs-x11 (usually installed): the program with X11 support.

  • emacs-nox: the program without X11 support.

  • emacs-info: online documentation in info format.

  • emacs-el: the uncompiled library files in Emacs Lisp. These are not required at runtime.

  • Numerous add-on packages can be installed if needed: emacs-auctex (LaTeX), psgml (SGML and XML), gnuserv (client and server operation) and others.

24.2 Virtual Consoles


Linux is a multiuser and multitasking system. The advantages of these features can be appreciated even on a stand-alone PC system. In text mode, there are six virtual consoles available. Switch between them using Alt+F1 through Alt+F6. The seventh console is reserved for X and the tenth console shows kernel messages.

To switch to a console from X without shutting it down, use Ctrl+Alt+F1 to Ctrl+Alt+F6. To return to X, press Alt+F7.

24.3 Keyboard Mapping


To standardize the keyboard mapping of programs, changes were made to the following files:

/etc/inputrc
/etc/X11/Xmodmap
/etc/skel/.emacs
/etc/skel/.gnu-emacs
/etc/skel/.vimrc
/etc/csh.cshrc
/etc/termcap
/usr/share/terminfo/x/xterm
/usr/share/X11/app-defaults/XTerm
/usr/share/emacs/VERSION/site-lisp/term/*.el

These changes only affect applications that use terminfo entries or whose configuration files are changed directly (vi, emacs, etc.). Applications not shipped with the system should be adapted to these defaults.

Under X, the compose key (multikey) can be enabled as explained in /etc/X11/Xmodmap.

Further settings are possible using the X Keyboard Extension (XKB). This extension is also used by the desktop environment GNOME (gswitchit).

Tip
Tip: For More Information

Information about XKB is available in the documents listed in /usr/share/doc/packages/xkeyboard-config (part of the xkeyboard-config package).

24.4 Language and Country-Specific Settings


The system is, to a very large extent, internationalized and can be modified for local needs. Internationalization (I18N) allows specific localization (L10N). The abbreviations I18N and L10N are derived from the first and last letters of the words and, in between, the number of letters omitted.

Settings are made with LC_ variables defined in the file /etc/sysconfig/language. This refers not only to native language support, but also to the categories Messages (Language), Character Set, Sort Order, Time and Date, Numbers and Money. Each of these categories can be defined directly with its own variable or indirectly with a master variable in the file language (see the locale man page).

RC_LC_MESSAGES, RC_LC_CTYPE, RC_LC_COLLATE, RC_LC_TIME, RC_LC_NUMERIC, RC_LC_MONETARY

These variables are passed to the shell without the RC_ prefix and represent the listed categories. The shell profiles concerned are listed below. The current setting can be shown with the command locale.

RC_LC_ALL

This variable, if set, overrides the values of the variables already mentioned.

RC_LANG

If none of the previous variables are set, this is the fallback. By default, only RC_LANG is set. This makes it easier for users to enter their own values.

ROOT_USES_LANG

A yes or no variable. If set to no, root always works in the POSIX environment.

The variables can be set with the YaST sysconfig editor. The value of such a variable contains the language code, country code, encoding and modifier. The individual components are connected by special characters:

  LANG=<language>[_<COUNTRY>][.<Encoding>][@<Modifier>]

24.4.1 Some Examples

You should always set the language and country codes together. Language settings follow the standard ISO 639 available at http://www.evertype.com/standards/iso639/iso639-en.html and http://www.loc.gov/standards/iso639-2/. Country codes are listed in ISO 3166, see http://en.wikipedia.org/wiki/ISO_3166.

It only makes sense to set values for which usable description files can be found in /usr/lib/locale. Additional description files can be created from the files in /usr/share/i18n using the command localedef. The description files are part of the glibc-i18ndata package. A description file for en_US.UTF-8 (for English and United States) can be created with:

localedef -i en_US -f UTF-8 en_US.UTF-8
LANG=en_US.UTF-8

This is the default setting if American English is selected during installation. If you selected another language, that language is enabled but still with UTF-8 as the character encoding.

LANG=en_US.ISO-8859-1

This sets the language to English, country to United States and the character set to ISO-8859-1. This character set does not support the Euro sign, but it can be useful sometimes for programs that have not been updated to support UTF-8. The string defining the charset (ISO-8859-1 in this case) is then evaluated by programs like Emacs.

LANG=en_IE@euro

The above example explicitly includes the Euro sign in a language setting. This setting is obsolete now, as UTF-8 also covers the Euro symbol. It is only useful if an application supports ISO-8859-15 and not UTF-8.

Changes to /etc/sysconfig/language are activated by the following process chain:

  • For the Bash: /etc/profile reads /etc/profile.d/lang.sh which, in turn, analyzes /etc/sysconfig/language.

  • For tcsh: At login, /etc/csh.login reads /etc/profile.d/lang.csh which, in turn, analyzes /etc/sysconfig/language.

This ensures that any changes to /etc/sysconfig/language are available at the next login to the respective shell, without having to manually activate them.

Users can override the system defaults by editing their ~/.bashrc accordingly. For example, if you do not want to use the system-wide en_US for program messages, include LC_MESSAGES=es_ES so that messages are displayed in Spanish instead, as shown in the sketch below.
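A minimal sketch of such an override in ~/.bashrc, assuming the Spanish message catalogs are installed:

export LC_MESSAGES=es_ES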

24.4.2 Locale Settings in ~/.i18n

If you are not satisfied with locale system defaults, change the settings in ~/.i18n according to the Bash scripting syntax. Entries in ~/.i18n override system defaults from /etc/sysconfig/language. Use the same variable names but without the RC_ name space prefixes. For example, use LANG instead of RC_LANG:

LANG=cs_CZ.UTF-8
LC_COLLATE=C

24.4.3 Settings for Language Support

Files in the category Messages are, as a rule, only stored in the corresponding language directory (like en) to have a fallback. If you set LANG to en_US and the message file in /usr/share/locale/en_US/LC_MESSAGES does not exist, it falls back to /usr/share/locale/en/LC_MESSAGES.

A fallback chain can also be defined, for example, for Breton to French or for Galician to Spanish to Portuguese:

LANGUAGE="br_FR:fr_FR"

LANGUAGE="gl_ES:es_ES:pt_PT"

If desired, use the Norwegian variants Nynorsk and Bokmål instead (with additional fallback to no):

LANG="nn_NO"

LANGUAGE="nn_NO:nb_NO:no"

or

LANG="nb_NO"

LANGUAGE="nb_NO:nn_NO:no"

Note that in Norwegian, LC_TIME is also treated differently.

One problem that can arise is a separator used to delimit groups of digits not being recognized properly. This occurs if LANG is set to only a two-letter language code like de, but the definition file glibc uses is located in /usr/lib/locale/de_DE/LC_NUMERIC. Thus LC_NUMERIC must be set to de_DE to make the separator definition visible to the system, as in the sketch below.
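A minimal sketch of the corresponding settings in a shell profile, assuming the de_DE locale description files are available:

export LANG=de
export LC_NUMERIC=de_DE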

24.4.4 For More Information

Part IV Services

25 Time Synchronization with NTP

The NTP (network time protocol) mechanism is a protocol for synchronizing the system time over the network. First, a machine can obtain the time from a server that is a reliable time source. Second, a machine can itself act as a time source for other computers in the network. The goal is twofold—maintaining the absolute time and synchronizing the system time of all machines within a network.

26 Sharing File Systems with NFS

Distributing and sharing file systems over a network is a common task in corporate environments. The well-proven network file system (NFS) works with NIS, the yellow pages protocol. For a more secure protocol that works with LDAP and Kerberos, check NFSv4 (default). Combined with pNFS, you can eliminate performance bottlenecks.

NFS with NIS makes a network transparent to the user. With NFS, it is possible to distribute arbitrary file systems over the network. With an appropriate setup, users always find themselves in the same environment regardless of the terminal they currently use.

27 Samba

Using Samba, a Unix machine can be configured as a file and print server for macOS, Windows, and OS/2 machines. Samba has developed into a fully-fledged and rather complex product. Configure Samba with YaST, or by editing the configuration file manually.

28 On-Demand Mounting with Autofs

autofs is a program that automatically mounts specified directories on an on-demand basis. It is based on a kernel module for high efficiency, and can manage both local directories and network shares. These automatic mount points are mounted only when they are accessed, and unmounted after a certain period of inactivity. This on-demand behavior saves bandwidth and results in better performance than static mounts managed by /etc/fstab. While autofs is a control script, automount is the command (daemon) that does the actual auto-mounting.

25 Time Synchronization with NTP

Abstract

The NTP (network time protocol) mechanism is a protocol for synchronizing the system time over the network. First, a machine can obtain the time from a server that is a reliable time source. Second, a machine can itself act as a time source for other computers in the network. The goal is twofold—maintaining the absolute time and synchronizing the system time of all machines within a network.

Maintaining an exact system time is important in many situations. The built-in hardware clock often does not meet the requirements of applications such as databases or clusters. Manually correcting the system time would lead to severe problems because, for example, a backward leap can cause malfunctions of critical applications. Within a network, it is usually necessary to synchronize the system time of all machines, but manual time adjustment is a bad approach. NTP provides a mechanism to solve these problems. The NTP service continuously adjusts the system time with reliable time servers in the network. It further enables the management of local reference clocks, such as radio-controlled clocks.

Note
Note

To enable time synchronization by means of Active Directory, follow the instructions found at Procedure 7.2, “Joining an Active Directory Domain Using Windows Domain Membership”.

25.1 Configuring an NTP Client with YaST

The NTP daemon (ntpd) coming with the ntp package is preset to use the local computer clock as a time reference. Using the hardware clock, however, only serves as a fallback for cases where no time source of better precision is available. YaST simplifies the configuration of an NTP client.

25.1.1 Basic Configuration

The YaST NTP client configuration (Network Services › NTP Configuration) consists of tabs. Set the start mode of ntpd and the server to query on the General Settings tab.

Only Manually

Select Only Manually if you want to start the ntpd daemon manually.

Synchronize without Daemon

Select Synchronize without Daemon to set the system time periodically without a permanently running ntpd. You can set the Interval of the Synchronization in Minutes.

Now and On Boot

Select Now and On Boot to start ntpd automatically when the system is booted. This setting is recommended.

25.1.2 Changing Basic Configuration

The servers and other time sources for the client to query are listed in the lower part of the General Settings tab. Modify this list as needed with Add, Edit, and Delete. Display Log provides the possibility to view the log files of your client.

Click Add to add a new source of time information. In the following dialog, select the type of source with which the time synchronization should be made. The following options are available:

YaST: NTP Server
Figure 25.1: YaST: NTP Server
Server

In the drop-down Select list (see Figure 25.1, “YaST: NTP Server”), determine whether to set up time synchronization using a time server from your local network (Local NTP Server) or an Internet-based time server that takes care of your time zone (Public NTP Server). For a local time server, click Lookup to start an SLP query for available time servers in your network. Select the most suitable time server from the list of search results and exit the dialog with OK. For a public time server, select your country (time zone) and a suitable server from the list under Public NTP Server then exit the dialog with OK. In the main dialog, test the availability of the selected server with Test. Options allows you to specify additional options for ntpd.

Using Access Control Options, you can restrict the actions that the remote computer can perform with the daemon running on your computer. This field is enabled only after checking Restrict NTP Service to Configured Servers Only on the Security Settings tab (see Figure 25.2, “Advanced NTP Configuration: Security Settings”). The options correspond to the restrict clauses in /etc/ntp.conf. For example, nomodify notrap noquery prevents the server from modifying NTP settings on your computer and from using the trap facility (a remote event logging feature) of your NTP daemon. Using these restrictions is recommended for servers outside your control (for example, on the Internet).

Refer to /usr/share/doc/packages/ntp-doc (part of the ntp-doc package) for detailed information.

Peer

A peer is a machine to which a symmetric relationship is established: it acts both as a time server and as a client. To use a peer in the same network instead of a server, enter the address of the system. The rest of the dialog is identical to the Server dialog.

Radio Clock

To use a radio clock in your system for the time synchronization, enter the clock type, unit number, device name, and other options in this dialog. Click Driver Calibration to fine-tune the driver. Detailed information about the operation of a local radio clock is available in /usr/share/doc/packages/ntp-doc/refclock.html.

Outgoing Broadcast

Time information and queries can also be transmitted by broadcast in the network. In this dialog, enter the address to which such broadcasts should be sent. Do not activate broadcasting unless you have a reliable time source like a radio controlled clock.

Incoming Broadcast

If you want your client to receive its information via broadcast, enter the address from which the respective packets should be accepted in this field.

Advanced NTP Configuration: Security Settings
Figure 25.2: Advanced NTP Configuration: Security Settings

In the Security Settings tab (see Figure 25.2, “Advanced NTP Configuration: Security Settings”), determine whether ntpd should be started in a chroot jail. By default, Run NTP Daemon in Chroot Jail is not activated. The chroot jail option increases the security in the event of an attack over ntpd, as it prevents the attacker from compromising the entire system.

Restrict NTP Service to Configured Servers Only increases the security of your system by disallowing remote computers to view and modify NTP settings of your computer and to use the trap facility for remote event logging. After being enabled, these restrictions apply to all remote computers, unless you override the access control options for individual computers in the list of time sources in the General Settings tab. For all other remote computers, only querying for local time is allowed.

Enable Open Port in Firewall if SuSEFirewall2 is active (which it is by default). If you leave the port closed, it is not possible to establish a connection to the time server.

25.2 Manually Configuring NTP in the Network

The easiest way to use a time server in the network is to set server parameters. For example, if a time server called ntp.example.com is reachable from the network, add its name to the file /etc/ntp.conf by adding the following line:

server ntp.example.com

To add more time servers, insert additional lines with the keyword server. After initializing ntpd with the command systemctl start ntp, it takes about one hour until the time is stabilized and the drift file for correcting the local computer clock is created. With the drift file, the systematic error of the hardware clock can be computed as soon as the computer is powered on. The correction is used immediately, resulting in a higher stability of the system time.
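Put together, a minimal client configuration in /etc/ntp.conf could look like the following sketch. The server names are examples, and the drift file location is an assumption that may differ on your system:

server ntp1.example.com
server ntp2.example.com
driftfile /var/lib/ntp/drift/ntp.drift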

There are two possible ways to use the NTP mechanism as a client: First, the client can query the time from a known server in regular intervals. With many clients, this approach can cause a high load on the server. Second, the client can wait for NTP broadcasts sent out by broadcast time servers in the network. This approach has the disadvantage that the quality of the server is unknown and a server sending out wrong information can cause severe problems.

If the time is obtained via broadcast, you do not need the server name. In this case, enter the line broadcastclient in the configuration file /etc/ntp.conf. To use one or more known time servers exclusively, enter their names in lines starting with the keyword server.

25.3 Dynamic Time Synchronization at Runtime

If the system boots without network connection, ntpd starts up, but it cannot resolve DNS names of the time servers set in the configuration file. This can happen if you use NetworkManager with an encrypted Wi-Fi.

If you want ntpd to resolve DNS names at runtime, you must set the dynamic option. When a network connection is established after booting, ntpd looks up the names again and can reach the time servers to get the time.

Manually edit /etc/ntp.conf and add dynamic to one or more server entries:

server ntp.example.com dynamic

Or use YaST and proceed as follows:

  1. In YaST click Network Services › NTP Configuration.

  2. Select the server you want to configure. Then click Edit.

  3. Activate the Options field and add dynamic. Separate it with a space if other options are already entered.

  4. Click OK to close the edit dialog. Repeat the previous step to change all servers as wanted.

  5. Finally click OK to save the settings.

25.4 Setting Up a Local Reference Clock

The software package ntp contains drivers for connecting local reference clocks. A list of supported clocks is available in the ntp-doc package in the file /usr/share/doc/packages/ntp-doc/refclock.html. Every driver is associated with a number. In NTP, the actual configuration takes place by means of pseudo IP addresses. The clocks are entered in the file /etc/ntp.conf as though they existed in the network. For this purpose, they are assigned special IP addresses in the form 127.127.T.U. Here, T stands for the type of the clock and determines which driver is used, and U stands for the unit, which determines the interface used.

Normally, the individual drivers have special parameters that describe configuration details. The file /usr/share/doc/packages/ntp-doc/drivers/driverNN.html (where NN is the number of the driver) provides information about the particular type of clock. For example, the type 8 clock (radio clock over serial interface) requires an additional mode that specifies the clock more precisely. The Conrad DCF77 receiver module, for example, has mode 5. To use this clock as a preferred reference, specify the keyword prefer. The complete server line for a Conrad DCF77 receiver module would be:

server 127.127.8.0 mode 5 prefer

Other clocks follow the same pattern. Following the installation of the ntp-doc package, the documentation for NTP is available in the directory /usr/share/doc/packages/ntp-doc. The file /usr/share/doc/packages/ntp-doc/refclock.html provides links to the driver pages describing the driver parameters.

25.5 Clock Synchronization to an External Time Reference (ETR)

Support for clock synchronization to an external time reference (ETR) is available. The external time reference sends an oscillator signal and a synchronization signal every 2**20 (2 to the power of 20) microseconds to keep TOD clocks of all connected servers synchronized.

For availability, two ETR units can be connected to a machine. If the clock deviates by more than the sync-check tolerance, all CPUs get a machine check that indicates that the clock is out of sync. If this happens, all DASD I/O to XRC-enabled devices is stopped until the clock is synchronized again.

The ETR support is activated via two sysfs attributes; run the following commands as root:

echo 1 > /sys/devices/system/etr/etr0/online
echo 1 > /sys/devices/system/etr/etr1/online

26 Sharing File Systems with NFS

Abstract

Distributing and sharing file systems over a network is a common task in corporate environments. The well-proven network file system (NFS) works with NIS, the yellow pages protocol. For a more secure protocol that works with LDAP and Kerberos, check NFSv4 (default). Combined with pNFS, you can eliminate performance bottlenecks.

NFS with NIS makes a network transparent to the user. With NFS, it is possible to distribute arbitrary file systems over the network. With an appropriate setup, users always find themselves in the same environment regardless of the terminal they currently use.

26.1 Terminology

The following are terms used in the YaST module.

Exports

A directory exported by an NFS server, which clients can integrate into their systems.

NFS Client

The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.

NFS Server

The NFS server provides NFS services to clients. A running server depends on the following daemons: nfsd (worker), idmapd (ID-to-name mapping for NFSv4, needed for certain scenarios only), statd (file locking), and mountd (mount requests).

NFSv3

NFSv3 is the version 3 implementation, the old stateless NFS that supports client authentication.

NFSv4

NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires only a single port and is thus better suited for environments behind a firewall than NFSv3.

The protocol is specified in RFC 3530 (http://tools.ietf.org/html/rfc3530).

pNFS

Parallel NFS, a protocol extension of NFSv4. pNFS clients can directly access the data on an NFS server.

26.2 Installing NFS Server

For installing and configuring an NFS server, see the SUSE Linux Enterprise Server documentation.

26.3 Configuring Clients

To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.

26.3.1 Importing File Systems with YaST

Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:

Procedure 26.1: Importing NFS Directories
  1. Start the YaST NFS client module.

  2. Click Add in the NFS Shares tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.

  3. When using NFSv4, select Enable NFSv4 in the NFS Settings tab. Additionally, the NFSv4 Domain Name must contain the same value as used by the NFSv4 server. The default domain is localdomain.

  4. To use Kerberos authentication for NFS, GSS security must be enabled. Select Enable GSS Security.

  5. Enable Open Port in Firewall in the NFS Settings tab if you use a Firewall and want to allow access to the service from remote computers. The firewall status is displayed next to the check box.

  6. Click OK to save your changes.

The configuration is written to /etc/fstab and the specified file systems are mounted. When you start the YaST configuration client at a later time, it also reads the existing configuration from this file.

Tip
Tip: NFS as a Root File System

On (diskless) systems, where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.

When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems because the root partition cannot be cleanly unmounted while the network connection to the NFS share is already deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 17.4.1.2.5, “Activating the Network Device” and choose On NFSroot in the Device Activation pane.

26.3.2 Importing File Systems Manually

The prerequisite for importing file systems manually from an NFS server is a running RPC port mapper. The nfs service takes care of starting it properly; start it by entering systemctl start nfs as root. Then remote file systems can be mounted in the file system like local partitions using mount:

tux > sudo mount HOST:REMOTE-PATH LOCAL-PATH

To import user directories from the nfs.example.com machine, for example, use:

tux > sudo mount nfs.example.com:/home /home

26.3.2.1 Using the Automount Service

The autofs daemon can be used to mount remote file systems automatically. Add the following entry to the /etc/auto.master file:

/nfsmounts /etc/auto.nfs

Now the /nfsmounts directory acts as the root for all the NFS mounts on the client if the auto.nfs file is filled appropriately. The name auto.nfs is chosen for the sake of convenience—you can choose any name. In auto.nfs add entries for all the NFS mounts as follows:

localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/

Activate the settings with systemctl start autofs as root. In this example, /nfsmounts/localdata, the /data directory of server1, is mounted with NFS and /nfsmounts/nfs4mount from server2 is mounted with NFSv4.

If the /etc/auto.master file is edited while the service autofs is running, the automounter must be restarted for the changes to take effect with systemctl restart autofs.

26.3.2.2 Manually Editing /etc/fstab

A typical NFSv3 mount entry in /etc/fstab looks like this:

nfs.example.com:/data /local/path nfs rw,noauto 0 0

For NFSv4 mounts, use nfs4 instead of nfs in the third column:

nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0

The noauto option prevents the file system from being mounted automatically at start-up. If you want to mount the respective file system manually, it is possible to shorten the mount command by specifying the mount point only:

tux > sudo mount /local/path
Note
Note: Mounting at Start-Up

If you do not enter the noauto option, the init scripts of the system will handle the mount of those file systems at start-up.

26.3.3 Parallel NFS (pNFS)

NFS is one of the oldest protocols, developed in the '80s. As such, NFS is usually sufficient if you want to share small files. However, when you want to transfer big files or have many clients accessing data, an NFS server becomes a bottleneck and has a significant impact on system performance. This is because files keep getting bigger, whereas the relative speed of Ethernet has not fully kept up.

When you request a file from a regular NFS server, the server looks up the file metadata, collects all the data, and transfers it over the network to your client. The performance bottleneck becomes apparent regardless of whether the files are small or big:

  • With small files most of the time is spent collecting the metadata.

  • With big files most of the time is spent on transferring the data from server to client.

pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:

  • A metadata or control server that handles all the non-data traffic

  • One or more storage server(s) that hold(s) the data

The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the server.

SUSE Linux Enterprise Desktop supports pNFS on the client side only.

26.3.3.1 Configuring pNFS Client With YaST

Proceed as described in Procedure 26.1, “Importing NFS Directories”, but click the pNFS (v4.1) check box and optionally NFSv4 share. YaST will do all the necessary steps and will write all the required options in the file /etc/fstab.

26.3.3.2 Configuring pNFS Client Manually

Refer to Section 26.3.2, “Importing File Systems Manually” to start. Most of the configuration is done by the NFSv4 server. For pNFS, the only difference is to add the minorversion option and the metadata server MDS_SERVER to your mount command:

tux > sudo mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT

To help with debugging, change the value in the /proc file system:

tux > echo 32767 | sudo tee /proc/sys/sunrpc/nfsd_debug
tux > echo 32767 | sudo tee /proc/sys/sunrpc/nfs_debug

26.4 For More Information

In addition to the man pages of exports, nfs, and mount, information about configuring an NFS server and client is available in /usr/share/doc/packages/nfsidmap/README.

27 Samba

Abstract

Using Samba, a Unix machine can be configured as a file and print server for macOS, Windows, and OS/2 machines. Samba has developed into a fully-fledged and rather complex product. Configure Samba with YaST, or by editing the configuration file manually.

27.1 Terminology

The following are some terms used in Samba documentation and in the YaST module.

SMB protocol

Samba uses the SMB (server message block) protocol that is based on the NetBIOS services. Microsoft released the protocol so other software manufacturers could establish connections to a Microsoft domain network. With Samba, the SMB protocol works on top of the TCP/IP protocol, so the TCP/IP protocol must be installed on all clients.

CIFS protocol

CIFS (common Internet file system) protocol is another protocol supported by Samba. CIFS defines a standard remote file system access protocol for use over the network, enabling groups of users to work together and share documents across the network.

NetBIOS

NetBIOS is a software interface (API) designed for communication between machines providing a name service. It enables machines connected to the network to reserve names for themselves. After reservation, these machines can be addressed by name. There is no central process that checks names. Any machine on the network can reserve as many names as it wants as long as the names are not already in use. The NetBIOS interface can be implemented for different network architectures. An implementation that works relatively closely with network hardware is called NetBEUI, but this is often referred to as NetBIOS. Network protocols implemented with NetBIOS are IPX from Novell (NetBIOS via IPX) and TCP/IP (NetBIOS via TCP/IP).

The NetBIOS names sent via TCP/IP have nothing in common with the names used in /etc/hosts or those defined by DNS. NetBIOS uses its own, completely independent naming convention. However, it is recommended to use names that correspond to DNS host names to make administration easier or use DNS natively. This is the default used by Samba.

Samba server

Samba server provides SMB/CIFS services and NetBIOS over IP naming services to clients. For Linux, there are three daemons for Samba server: smbd for SMB/CIFS services, nmbd for naming services, and winbind for authentication.

Samba client

The Samba client is a system that uses Samba services from a Samba server over the SMB protocol. Common operating systems, such as Windows and macOS, support the SMB protocol. The TCP/IP protocol must be installed on all computers. Samba provides a client for the different Unix flavors. For Linux, there is a kernel module for SMB that allows the integration of SMB resources on the Linux system level. You do not need to run any daemon for the Samba client.

Shares

SMB servers provide resources to their clients by means of shares. Shares are printers and directories with their subdirectories on the server. A share is exported under a name and can be accessed by that name. The share name can be set to any name; it does not need to be the name of the exported directory. A printer is also assigned a name. Clients can access the printer by its name.

DC

A domain controller (DC) is a server that handles accounts in a domain. For data replication, additional domain controllers are available in one domain.

27.2 Installing a Samba Server

To install a Samba server, start YaST and select Software › Software Management. Choose View › Patterns and select File Server. Confirm the installation of the required packages to finish the installation process.

27.3 Configuring a Samba Server

For configuring a Samba server, see the SUSE Linux Enterprise Server documentation.

27.4 Configuring Clients

Clients can only access the Samba server via TCP/IP. NetBEUI and NetBIOS via IPX cannot be used with Samba.

27.4.1 Configuring a Samba Client with YaST

Configure a Samba client to access resources (files or printers) on the Samba or Windows server. Enter the NT or Active Directory domain or workgroup in the dialog Network Services › Windows Domain Membership. If you activate Also Use SMB Information for Linux Authentication, the user authentication runs over the Samba, NT or Kerberos server.

Click Expert Settings for advanced configuration options. For example, use the Mount Server Directories table to enable mounting server home directories automatically upon authentication. This way, users can access their home directories when they are hosted on CIFS. For details, see the pam_mount man page.

After completing all settings, confirm the dialog to finish the configuration.

27.5 Samba as Login Server

In networks where predominantly Windows clients are found, it is often preferable that users may only register with a valid account and password. In a Windows-based network, this task is handled by a primary domain controller (PDC). You can use a Windows NT server configured as PDC, but this task can also be done with a Samba server. The entries that must be made in the [global] section of smb.conf are shown in Example 27.1, “Global Section in smb.conf”.

Example 27.1: Global Section in smb.conf
[global]
    workgroup = WORKGROUP
    domain logons = Yes
    domain master = Yes

It is necessary to prepare user accounts and passwords in an encryption format that conforms with Windows. Do this with the command smbpasswd -a name. Create the domain account for the computers, required by the Windows domain concept, with the following commands:

useradd hostname\$
smbpasswd -a -m hostname

With the useradd command, a dollar sign is added. The command smbpasswd inserts this automatically when the parameter -m is used. The commented configuration example (/usr/share/doc/packages/samba/examples/smb.conf.SUSE) contains settings that automate this task.

add machine script = /usr/sbin/useradd -g nogroup -c "NT Machine Account" \
-s /bin/false %m\$

To make sure that Samba can execute this script correctly, choose a Samba user with the required administrator permissions and add it to the ntadmin group. Then all users belonging to this Linux group can be assigned Domain Admin status with the command:

net groupmap add ntgroup="Domain Admins" unixgroup=ntadmin

27.6 Advanced Topics

This section introduces more advanced techniques to manage both the client and server part of the Samba suite.

27.6.1 Transparent File Compression on Btrfs

Samba allows clients to remotely manipulate file and directory compression flags for shares placed on the Btrfs file system. Windows Explorer provides the ability to flag files/directories for transparent compression via the File › Properties › Advanced dialog:

Windows Explorer Advanced Attributes Dialog
Figure 27.1: Windows Explorer Advanced Attributes Dialog

Files flagged for compression are transparently compressed and decompressed by the underlying file system when accessed or modified. This normally results in storage capacity savings at the expense of extra CPU overhead when accessing the file. New files and directories inherit the compression flag from the parent directory, unless created with the FILE_NO_COMPRESSION option.

Windows Explorer presents compressed files and directories visually differently from those that are not compressed:

Windows Explorer Directory Listing with Compressed Files
Figure 27.2: Windows Explorer Directory Listing with Compressed Files

You can enable Samba share compression either manually by adding

vfs objects = btrfs

to the share configuration in /etc/samba/smb.conf, or using YaST: Network Services › Samba Server › Add, and checking Utilize Btrfs Features.
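For example, a complete share section in /etc/samba/smb.conf with the Btrfs VFS module enabled could look like the following sketch; the share name and path are assumptions for illustration:

[btrfs_share]
        path = /srv/samba/btrfs_share
        vfs objects = btrfs
        read only = no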

27.6.2 Snapshots

Snapshots, also called Shadow Copies, are copies of the state of a file system subvolume at a certain point in time. Snapper is the tool to manage these snapshots in Linux. Snapshots are supported on the Btrfs file system or thin-provisioned LVM volumes. The Samba suite supports managing remote snapshots through the FSRVP protocol on both the server and client side.

27.6.2.1 Previous Versions

Snapshots on a Samba server can be exposed to remote Windows clients as file or directory previous versions.

To enable snapshots on a Samba server, the following conditions must be fulfilled:

  • The SMB network share resides on a Btrfs subvolume.

  • The SMB network share path has a related snapper configuration file. You can create the snapper file with

    snapper -c <cfg_name> create-config /path/to/share

    For more information on snapper, see Chapter 7, System Recovery and Snapshot Management with Snapper.

  • The snapshot directory tree must allow access for relevant users. For more information, see the PERMISSIONS section of the vfs_snapper manual page (man 8 vfs_snapper).

To support remote snapshots, you need to modify the /etc/samba/smb.conf file. You can do it either with YaST › Network Services › Samba Server, or manually by enhancing the relevant share section with

vfs objects = snapper

Note that you need to restart the Samba service for manual smb.conf changes to take effect:

systemctl restart nmb smb
Adding a New Samba Share with Snapshotting Enabled
Figure 27.3: Adding a New Samba Share with Snapshotting Enabled

After being configured, snapshots created by snapper for the Samba share path can be accessed from Windows Explorer from a file or directory's Previous Versions tab.

The Previous Versions tab in Windows Explorer
Figure 27.4: The Previous Versions tab in Windows Explorer

27.6.2.2 Remote Share Snapshots

By default, snapshots can only be created and deleted on the Samba server locally, via the snapper command line utility, or using snapper's time line feature.

Samba can be configured to process share snapshot creation and deletion requests from remote hosts using the File Server Remote VSS Protocol (FSRVP).

In addition to the configuration and prerequisites documented in Section 27.6.2.1, “Previous Versions”, the following global configuration is required in /etc/samba/smb.conf:

[global]
rpc_daemon:fssd = fork
registry shares = yes
include = registry

FSRVP clients, including Samba's rpcclient and Windows Server 2012 DiskShadow.exe, can then instruct Samba to create or delete a snapshot for a given share, and expose the snapshot as a new share.

27.6.2.3 Managing Snapshots Remotely from Linux with rpcclient

The samba-client package contains an FSRVP client that can remotely request a Windows/Samba server to create and expose a snapshot of a given share. You can then use existing tools in SUSE Linux Enterprise Desktop to mount the exposed share and back up its files. Requests to the server are sent using the rpcclient binary.

Example 27.2: Using rpcclient to Request a Windows Server 2012 Share Snapshot

Connect to win-server.example.com server as an administrator in an EXAMPLE domain:

# rpcclient -U 'EXAMPLE\Administrator' ncacn_np:win-server.example.com[ndr64,sign]
Enter EXAMPLE/Administrator's password:

Check that the SMB share is visible for rpcclient:

rpcclient $> netshareenum
netname: windows_server_2012_share
remark:
path:   C:\Shares\windows_server_2012_share
password:       (null)

Check that the SMB share supports snapshot creation:

rpcclient $> fss_is_path_sup windows_server_2012_share \
UNC \\WIN-SERVER\windows_server_2012_share\ supports shadow copy requests

Request the creation of a share snapshot:

rpcclient $> fss_create_expose backup ro windows_server_2012_share
13fe880e-e232-493d-87e9-402f21019fb6: shadow-copy set created
13fe880e-e232-493d-87e9-402f21019fb6(1c26544e-8251-445f-be89-d1e0a3938777): \
\\WIN-SERVER\windows_server_2012_share\ shadow-copy added to set
13fe880e-e232-493d-87e9-402f21019fb6: prepare completed in 0 secs
13fe880e-e232-493d-87e9-402f21019fb6: commit completed in 1 secs
13fe880e-e232-493d-87e9-402f21019fb6(1c26544e-8251-445f-be89-d1e0a3938777): \
share windows_server_2012_share@{1C26544E-8251-445F-BE89-D1E0A3938777} \
exposed as a snapshot of \\WIN-SERVER\windows_server_2012_share\

Confirm that the snapshot share is exposed by the server:

rpcclient $> netshareenum
netname: windows_server_2012_share
remark:
path:   C:\Shares\windows_server_2012_share
password:       (null)

netname: windows_server_2012_share@{1C26544E-8251-445F-BE89-D1E0A3938777}
remark: (null)
path:   \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{F6E6507E-F537-11E3-9404-B8AC6F927453}\Shares\windows_server_2012_share\
password:       (null)

Attempt to delete the snapshot share:

rpcclient $> fss_delete windows_server_2012_share \
13fe880e-e232-493d-87e9-402f21019fb6 1c26544e-8251-445f-be89-d1e0a3938777
13fe880e-e232-493d-87e9-402f21019fb6(1c26544e-8251-445f-be89-d1e0a3938777): \
\\WIN-SERVER\windows_server_2012_share\ shadow-copy deleted

Confirm that the snapshot share has been removed by the server:

rpcclient $> netshareenum
netname: windows_server_2012_share
remark:
path:   C:\Shares\windows_server_2012_share
password:       (null)

27.6.2.4 Managing Snapshots Remotely from Windows with DiskShadow.exe

You can manage snapshots of SMB shares on the Linux Samba server from a Windows client as well. Windows Server 2012 includes the DiskShadow.exe utility that can manage remote shares similarly to rpcclient, described in Section 27.6.2.3, “Managing Snapshots Remotely from Linux with rpcclient”. Note that you need to carefully set up the Samba server first.

Following is an example procedure to set up the Samba server so that the Windows Server client can manage its share's snapshots. Note that EXAMPLE is the Active Directory domain used in the testing environment, fsrvp-server.example.com is the host name of the Samba server, and /srv/smb is the path to the SMB share.

Procedure 27.1: Detailed Samba Server Configuration
  1. Join the Active Directory domain via YaST.

  2. Ensure that the Active Directory DNS entry is correct:

    fsrvp-server:~ # net -U 'Administrator' ads dns register \
    fsrvp-server.example.com <IP address>
    Successfully registered hostname with DNS
  3. Create a Btrfs subvolume at /srv/smb:

    fsrvp-server:~ # btrfs subvolume create /srv/smb
  4. Create a snapper configuration file for the path /srv/smb:

    fsrvp-server:~ # snapper -c <snapper_config> create-config /srv/smb
  5. Create a new share with the path /srv/smb, with the YaST Expose Snapshots check box enabled. Make sure to add the following snippet to the global section of /etc/samba/smb.conf as mentioned in Section 27.6.2.2, “Remote Share Snapshots”:

    [global]
     rpc_daemon:fssd = fork
     registry shares = yes
     include = registry
  6. Restart Samba with systemctl restart nmb smb.

  7. Configure snapper permissions:

    fsrvp-server:~ # snapper -c <snapper_config> set-config \
    ALLOW_USERS="EXAMPLE\\\\Administrator EXAMPLE\\\\win-client$"

    Ensure that all users listed in ALLOW_USERS are also permitted to traverse the .snapshots subdirectory.

    fsrvp-server:~ # snapper -c <snapper_config> set-config SYNC_ACL=yes
    Important
    Important: Path Escaping

    Be careful about the '\' escapes! Escape twice to ensure that the value stored in /etc/snapper/configs/<snapper_config> is escaped once.

    "EXAMPLE\win-client$" corresponds to the Windows client computer account. Windows issues initial FSRVP requests while authenticated with this account.

  8. Grant the Windows client account the necessary privileges:

    fsrvp-server:~ # net -U 'Administrator' rpc rights grant \
    "EXAMPLE\\win-client$" SeBackupPrivilege
    Successfully granted rights.

    The previous command is not needed for the "EXAMPLE\Administrator" user, as it already has these privileges granted.

Procedure 27.2: Windows Client Setup and DiskShadow.exe in Action
  1. Boot Windows Server 2012 (example host name WIN-CLIENT).

  2. Join the same Active Directory domain EXAMPLE as the SUSE Linux Enterprise Desktop server.

  3. Reboot.

  4. Open Powershell.

  5. Start DiskShadow.exe and begin the backup procedure:

    PS C:\Users\Administrator.EXAMPLE> diskshadow.exe
    Microsoft DiskShadow version 1.0
    Copyright (C) 2012 Microsoft Corporation
    On computer:  WIN-CLIENT,  6/17/2014 3:53:54 PM
    
    DISKSHADOW> begin backup
  6. Specify that the shadow copy persists across program exit, reset, or reboot:

    DISKSHADOW> set context PERSISTENT
  7. Check whether the specified share supports snapshots, and create one:

    DISKSHADOW> add volume \\fsrvp-server\sles_snapper
    
    DISKSHADOW> create
    Alias VSS_SHADOW_1 for shadow ID {de4ddca4-4978-4805-8776-cdf82d190a4a} set as \
     environment variable.
    Alias VSS_SHADOW_SET for shadow set ID {c58e1452-c554-400e-a266-d11d5c837cb1} \
     set as environment variable.
    
    Querying all shadow copies with the shadow copy set ID \
     {c58e1452-c554-400e-a266-d11d5c837cb1}
    
     * Shadow copy ID = {de4ddca4-4978-4805-8776-cdf82d190a4a}     %VSS_SHADOW_1%
        - Shadow copy set: {c58e1452-c554-400e-a266-d11d5c837cb1}  %VSS_SHADOW_SET%
        - Original count of shadow copies = 1
        - Original volume name: \\FSRVP-SERVER\SLES_SNAPPER\ \
          [volume not on this machine]
        - Creation time: 6/17/2014 3:54:43 PM
        - Shadow copy device name:
          \\FSRVP-SERVER\SLES_SNAPPER@{31afd84a-44a7-41be-b9b0-751898756faa}
        - Originating machine: FSRVP-SERVER
        - Service machine: win-client.example.com
        - Not exposed
        - Provider ID: {89300202-3cec-4981-9171-19f59559e0f2}
        - Attributes:  No_Auto_Release Persistent FileShare
    
    Number of shadow copies listed: 1
  8. Finish the backup procedure:

    DISKSHADOW> end backup
  9. After the snapshot has been created, try to delete it and verify the deletion:

    DISKSHADOW> delete shadows volume \\FSRVP-SERVER\SLES_SNAPPER\
    Deleting shadow copy {de4ddca4-4978-4805-8776-cdf82d190a4a} on volume \
     \\FSRVP-SERVER\SLES_SNAPPER\ from provider \
    {89300202-3cec-4981-9171-19f59559e0f2} [Attributes: 0x04000009]...
    
    Number of shadow copies deleted: 1
    
    DISKSHADOW> list shadows all
    
    Querying all shadow copies on the computer ...
    No shadow copies found in system.

27.7 For More Information

Documentation for Samba ships with the samba-doc package, which is not installed by default. Install it with zypper install samba-doc. Enter apropos samba at the command line to display some manual pages, or browse the /usr/share/doc/packages/samba directory for more documentation and examples. Find a commented example configuration (smb.conf.SUSE) in the examples subdirectory. Another file containing Samba-related information is /usr/share/doc/packages/samba/README.SUSE.

The Samba HOWTO (see https://wiki.samba.org) provided by the Samba team includes a section about troubleshooting. In addition to that, Part V of the document provides a step-by-step guide to checking your configuration.

28 On-Demand Mounting with Autofs

  • Filename: autofs.xml
  • ID: cha.autofs
Abstract

autofs is a program that automatically mounts specified directories on demand. It is based on a kernel module for high efficiency, and can manage both local directories and network shares. These automatic mount points are mounted only when they are accessed, and unmounted after a certain period of inactivity. This on-demand behavior saves bandwidth and results in better performance than static mounts managed by /etc/fstab. While autofs is a control script, automount is the command (daemon) that does the actual auto-mounting.

28.1 Installation

autofs is not installed on SUSE Linux Enterprise Desktop by default. To use its auto-mounting capabilities, first install it with

sudo zypper install autofs

28.2 Configuration

You need to configure autofs manually by editing its configuration files with a text editor, such as vim. Configuring autofs consists of two basic steps: setting up the master map file and creating the specific map files.

28.2.1 The Master Map File

The default master configuration file for autofs is /etc/auto.master. You can change its location by changing the value of the DEFAULT_MASTER_MAP_NAME option in /etc/sysconfig/autofs. Here is the content of the default one for SUSE Linux Enterprise Desktop:

#
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5). (1)
#
#/misc  /etc/auto.misc (2)
#/net -hosts
#
# Include /etc/auto.master.d/*.autofs (3)
#
#+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master (4)

(1) The autofs manual page (man 5 autofs) offers a lot of valuable information on the format of the automounter maps.

(2) Although commented out (#) by default, this is an example of a simple automounter mapping syntax.

(3) In case you need to split the master map into several files, uncomment the line, and put the mappings (suffixed with .autofs) in the /etc/auto.master.d/ directory.

(4) +auto.master ensures that those using NIS will still find their master map.

Entries in auto.master have three fields with the following syntax:

mount point      map name      options
mount point

The base location where to mount the autofs file system, such as /home.

map name

The name of a map source to use for mounting. For the syntax of the maps files, see Section 28.2.2, “Map Files”.

options

These options (if specified) will apply as defaults to all entries in the given map.

Tip
Tip: For More Information

For more detailed information on the specific values of the optional map-type, format, and options, see the auto.master manual page (man 5 auto.master).

The following entry in auto.master tells autofs to look in /etc/auto.smb, and create mount points in the /smb directory.

/smb   /etc/auto.smb

28.2.1.1 Direct Mounts

Direct mounts create a mount point at the path specified inside the relevant map file. Instead of specifying the mount point in auto.master, replace the mount point field with /-. For example, the following line tells autofs to create a mount point at the place specified in auto.smb:

/-        /etc/auto.smb
Tip
Tip: Maps without Full Path

If the map file is not specified with its full local or network path, it is located using the Name Service Switch (NSS) configuration:

/-        auto.smb

28.2.2 Map Files

Important
Important: Other Types of Maps

Although files are the most common types of maps for auto-mounting with autofs, there are other types as well. A map specification can be the output of a command, or the result of a query in LDAP or a database. For more detailed information on map types, see the manual page man 5 auto.master.

Map files specify the (local or network) source location, and the mount point where to mount the source locally. The general format of maps is similar to the master map. The difference is that the options appear between the mount point and the location instead of at the end of the entry:

mount point      options      location

Make sure that map files are not marked as executable. You can remove the executable bits by executing chmod -x MAP_FILE.

mount point

Specifies where to mount the source location. This can be either a single directory name (so-called indirect mount) to be added to the base mount point specified in auto.master, or the full path of the mount point (direct mount, see Section 28.2.1.1, “Direct Mounts”).

options

Specifies an optional comma-separated list of mount options for the relevant entries. If auto.master contains options for this map file as well, these are appended.

location

Specifies from where the file system is to be mounted. It is usually an NFS or SMB volume in the usual notation host_name:path_name. If the file system to be mounted begins with a '/' (such as local /dev entries or smbfs shares), it must be prefixed with a colon ':', as in :/dev/sda1.
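Putting the three fields together, a minimal indirect map might look like the following sketch. The file name, hosts, shares, and options are hypothetical examples:

# /etc/auto.data: hypothetical example map
# mount point   options                location
docs            -ro,soft               jupiter.com:/export/docs
cdrom           -fstype=iso9660,ro     :/dev/sr0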

28.3 Operation and Debugging

This section describes how to control the autofs service and how to view more debugging information when tuning the automounter operation.

28.3.1 Controlling the autofs Service

The operation of the autofs service is controlled by systemd. The general syntax of the systemctl command for autofs is

sudo systemctl SUB_COMMAND autofs

where SUB_COMMAND is one of:

enable

Starts the automounter daemon at boot.

start

Starts the automounter daemon.

stop

Stops the automounter daemon. Automatic mount points are not accessible.

status

Prints the current status of the autofs service together with a part of a relevant log file.

restart

Stops and starts the automounter, terminating all running daemons and starting new ones.

reload

Checks the current auto.master map, restarts those daemons whose entries have changed, and starts new ones for new entries.
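For example, to have the automounter start immediately and at each subsequent boot, combine the enable and start sub-commands:

sudo systemctl enable autofs
sudo systemctl start autofs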

28.3.2 Debugging Automounter Problems

If you experience problems when mounting directories with autofs, it is useful to run the automount daemon manually and watch its output messages:

  1. Stop autofs.

    sudo systemctl stop autofs
  2. From one terminal, run automount manually in the foreground, producing verbose output.

    sudo automount -f -v
  3. From another terminal, try to mount the auto-mounting file systems by accessing the mount points (for example by cd or ls).

  4. Check the output of automount from the first terminal for more information on why the mount failed, or why it was not even attempted.

28.4 Auto-Mounting an NFS Share

The following procedure illustrates how to configure autofs to auto-mount an NFS share available on your network. It makes use of the information mentioned above, and assumes you are familiar with NFS exports. For more information on NFS, see Chapter 26, Sharing File Systems with NFS.

  1. Edit the master map file /etc/auto.master:

    sudo vim /etc/auto.master

    Add a new entry for the new NFS mount at the end of /etc/auto.master:

    /nfs      /etc/auto.nfs      --timeout=10

    It tells autofs that the base mount point is /nfs, the NFS shares are specified in the /etc/auto.nfs map, and that all shares in this map will be automatically unmounted after 10 seconds of inactivity.

  2. Create a new map file for NFS shares:

    sudo vim /etc/auto.nfs

    /etc/auto.nfs normally contains a separate line for each NFS share. Its format is described in Section 28.2.2, “Map Files”. Add the line describing the mount point and the NFS share network address:

    export      jupiter.com:/home/geeko/doc/export

    The above line means that the /home/geeko/doc/export directory on the jupiter.com host will be auto-mounted to the /nfs/export directory on the local host (/nfs is taken from the auto.master map) when requested. The /nfs/export directory will be created automatically by autofs.

  3. Optionally comment out the related line in /etc/fstab if you previously mounted the same NFS share statically. The line should look similar to this:

    #jupiter.com:/home/geeko/doc/export /nfs/export nfs defaults 0 0
  4. Reload autofs and check if it works:

    sudo systemctl restart autofs
    # ls -l /nfs/export
    total 20
    drwxr-xr-x  6 1001 users 4096 Oct 25 08:56 ./
    drwxr-xr-x  3 root root     0 Apr  1 09:47 ../
    drwxr-xr-x  5 1001 users 4096 Jan 14  2013 .images/
    drwxr-xr-x 10 1001 users 4096 Aug 16  2013 .profiled/
    drwxr-xr-x  3 1001 users 4096 Aug 30  2013 .tmp/
    drwxr-xr-x  4 1001 users 4096 Oct 25 08:56 SLE-12-manual/

    If you can see the list of files on the remote share, then autofs is functioning.

28.5 Advanced Topics

This section describes topics that are beyond the basic introduction to autofs—auto-mounting of NFS shares that are available on your network, using wild cards in map files, and information specific to the CIFS file system.

28.5.1 /net Mount Point

This helper mount point is useful if you use a lot of NFS shares. /net auto-mounts all NFS shares on your local network on demand. The entry is already present in the auto.master file, so all you need to do is uncomment it and restart autofs:

/net      -hosts

sudo systemctl restart autofs

For example, if you have a server named jupiter with an NFS share called /export, you can mount it by typing

# cd /net/jupiter/export

on the command line.

28.5.2 Using Wild Cards to Auto-Mount Subdirectories

If you have a directory with subdirectories that you need to auto-mount individually—the typical case is the /home directory with individual users' home directories inside—autofs offers a clever solution.

In the case of home directories, add the following line to auto.master:

/home      /etc/auto.home

Now you need to add the correct mapping to the /etc/auto.home file, so that the users' home directories are mounted automatically. One solution is to create separate entries for each directory:

wilber      jupiter.com:/home/wilber
penguin      jupiter.com:/home/penguin
tux      jupiter.com:/home/tux
[...]

This is awkward, as you need to manage the list of users inside auto.home. Instead, you can use the asterisk '*' in place of the mount point, and the ampersand '&' in place of the directory to be mounted:

*      jupiter:/home/&
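With this single entry, accessing for example /home/tux makes autofs substitute tux for both wild cards and mount jupiter:/home/tux on demand.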

28.5.3 Auto-Mounting CIFS File System

If you want to auto-mount an SMB/CIFS share (see Chapter 27, Samba for more information on the SMB/CIFS protocol), you need to modify the syntax of the map file. Add -fstype=cifs in the option field, and prefix the share location with a colon ':'.

mount point      -fstype=cifs      ://jupiter.com/export
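For an authenticated share, the options field can carry further CIFS mount options. The following line is a hypothetical sketch; the mount point, share, and credentials file are assumptions, and the credentials file format is described in man 8 mount.cifs:

data      -fstype=cifs,credentials=/etc/samba/cifs.cred      ://jupiter.com/data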

Part V Mobile Computers

29 Mobile Computing with Linux

Mobile computing is mostly associated with laptops, PDAs and cellular phones (and the data exchange between them). Mobile hardware components, such as external hard disks, flash disks, or digital cameras, can be connected to laptops or desktop systems. A number of software components are involved in mobile computing scenarios and some applications are tailor-made for mobile use.

30 Using NetworkManager

NetworkManager is the ideal solution for laptops and other portable computers. It supports state-of-the-art encryption types and standards for network connections, including connections to 802.1X protected networks. 802.1X is the “IEEE Standard for Local and Metropolitan Area Networks—Port-Based Net…

31 Power Management

Power management is especially important on laptop computers, but is also useful on other systems. ACPI (Advanced Configuration and Power Interface) is available on all modern computers (laptops, desktops, and servers). Power management technologies require suitable hardware and BIOS routines. Most …

29 Mobile Computing with Linux

  • Filename: mobile.xml
  • ID: cha.mobile
Abstract

Mobile computing is mostly associated with laptops, PDAs and cellular phones (and the data exchange between them). Mobile hardware components, such as external hard disks, flash disks, or digital cameras, can be connected to laptops or desktop systems. A number of software components are involved in mobile computing scenarios and some applications are tailor-made for mobile use.

29.1 Laptops

The hardware of laptops differs from that of a normal desktop system. This is because criteria like exchangeability, space requirements and power consumption must be taken into account. The manufacturers of mobile hardware have developed standard interfaces like PCMCIA (Personal Computer Memory Card International Association), Mini PCI and Mini PCIe that can be used to extend the hardware of laptops. The standards cover memory cards, network interface cards, and external hard disks.

29.1.1 Power Conservation

The inclusion of energy-optimized system components during laptop manufacturing contributes to their suitability for use without access to the electrical power grid. Their contribution to conservation of power is at least as important as that of the operating system. SUSE® Linux Enterprise Desktop supports various methods that control the power consumption of a laptop and have varying effects on the operating time under battery power. The following list is in descending order of contribution to power conservation:

  • Throttling the CPU speed.

  • Switching off the display illumination during pauses.

  • Manually adjusting the display illumination.

  • Disconnecting unused, hotplug-enabled accessories (USB CD-ROM, external mouse, unused PCMCIA cards, Wi-Fi, etc.).

  • Spinning down the hard disk when idling.

Detailed background information about power management in SUSE Linux Enterprise Desktop is provided in Chapter 31, Power Management.

29.1.2 Integration in Changing Operating Environments

Your system needs to adapt to changing operating environments when used for mobile computing. Many services depend on the environment and the underlying clients must be reconfigured. SUSE Linux Enterprise Desktop handles this task for you.

Integrating a Mobile Computer in an Existing Environment
Figure 29.1: Integrating a Mobile Computer in an Existing Environment

The services affected in the case of a laptop commuting back and forth between a small home network and an office network are:

Network

This includes IP address assignment, name resolution, Internet connectivity and connectivity to other networks.

Printing

A current database of available printers and an available print server must be present, depending on the network.

E-Mail and Proxies

As with printing, the list of the corresponding servers must be current.

X (Graphical Environment)

If your laptop is temporarily connected to a projector or an external monitor, different display configurations must be available.

SUSE Linux Enterprise Desktop offers several ways of integrating laptops into existing operating environments:

NetworkManager

NetworkManager is especially tailored for mobile networking on laptops. It provides a means to easily and automatically switch between network environments or different types of networks such as mobile broadband (such as GPRS, EDGE, or 3G), wireless LAN, and Ethernet. NetworkManager supports WEP and WPA-PSK encryption in wireless LANs. It also supports dial-up connections. The GNOME desktop includes a front-end for NetworkManager. For more information, see Section 30.3, “Configuring Network Connections”.

Table 29.1: Use Cases for NetworkManager

My computer…                                        Use NetworkManager
is a laptop                                         Yes
is sometimes attached to different networks         Yes
provides network services (such as DNS or DHCP)     No
only uses a static IP address                       No

Use the YaST tools to configure networking whenever NetworkManager should not handle network configuration.

Tip
Tip: DNS Configuration and Various Types of Network Connections

If you travel frequently with your laptop and change between different types of network connections, NetworkManager works fine when all DNS addresses are assigned correctly with DHCP. If some connections use static DNS addresses, add them to the NETCONFIG_DNS_STATIC_SERVERS option in /etc/sysconfig/network/config.
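For example, to pin two static name servers, set the option to a space-separated list of addresses; the addresses below are hypothetical:

NETCONFIG_DNS_STATIC_SERVERS="192.168.1.116 192.168.1.117"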

SLP

The service location protocol (SLP) simplifies the connection of a laptop to an existing network. Without SLP, the administrator of a laptop usually requires detailed knowledge of the services available in a network. SLP broadcasts the availability of a certain type of service to all clients in a local network. Applications that support SLP can process the information dispatched by SLP and be configured automatically. SLP can also be used to install a system, minimizing the effort of searching for a suitable installation source.

29.1.3 Software Options

There are various task areas in mobile use that are covered by dedicated software: system monitoring (especially the battery charge), data synchronization, and wireless communication with peripherals and the Internet. The following sections cover the most important applications that SUSE Linux Enterprise Desktop provides for each task.

29.1.3.1 System Monitoring

Two system monitoring tools are provided by SUSE Linux Enterprise Desktop:

Power Management

Power Management is an application that lets you adjust the energy saving related behavior of the GNOME desktop. You can typically access it via Computer › Control Center › System › Power Management.

System Monitor

The System Monitor gathers measurable system parameters into one monitoring environment. It presents the output information in three tabs by default. Processes gives detailed information about currently running processes, such as CPU load, memory usage, or process ID number and priority. The presentation and filtering of the collected data can be customized—to add a new type of process information, left-click the process table header and choose which column to hide or add to the view. It is also possible to monitor different system parameters in various data pages or collect the data of various machines in parallel over the network. The Resources tab shows graphs of CPU, memory and network history and the File System tab lists all partitions and their usage.

29.1.3.2 Synchronizing Data

When switching between working on a mobile machine disconnected from the network and working at a networked workstation in an office, it is necessary to keep processed data synchronized across all instances. This could include e-mail folders, directories and individual files that need to be present for work on the road and at the office. The solution in both cases is as follows:

Synchronizing E-Mail

Use an IMAP account for storing your e-mails in the office network. Then access the e-mails from the workstation using any disconnected IMAP-enabled e-mail client, like Mozilla Thunderbird or Evolution as described in the GNOME User Guide. The e-mail client must be configured so that the same folder is always accessed for Sent messages. This ensures that all messages are available along with their status information after the synchronization process has completed. To receive reliable feedback about unsent mail, use the SMTP server implemented in the mail client for sending messages instead of the system-wide MTA postfix or sendmail.

Synchronizing Files and Directories

There are several utilities suitable for synchronizing data between a laptop and a workstation. One of the most widely used is a command-line tool called rsync. For more information, see its manual page (man 1 rsync).
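As a minimal sketch, the following command mirrors a local directory to a workstation over SSH; the host name and paths are hypothetical:

tux > rsync -avz ~/Documents/ workstation.example.com:/home/tux/Documents/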

29.1.3.3 Wireless Communication: Wi-Fi

With the largest range of these wireless technologies, Wi-Fi is the only one suitable for the operation of large and sometimes even spatially separate networks. Single machines can connect with each other to form an independent wireless network or access the Internet. Devices called access points act as base stations for Wi-Fi-enabled devices and act as intermediaries for access to the Internet. A mobile user can switch among access points depending on location and which access point is offering the best connection. Like in cellular telephony, a large network is available to Wi-Fi users without binding them to a specific location for accessing it.

Wi-Fi cards communicate using the 802.11 standard, prepared by the IEEE organization. Originally, this standard provided for a maximum transmission rate of 2 Mbit/s. Meanwhile, several supplements have been added to increase the data rate. These supplements define details such as the modulation, transmission output, and transmission rates (see Table 29.2, “Overview of Various Wi-Fi Standards”). Additionally, many companies implement hardware with proprietary or draft features.

Table 29.2: Overview of Various Wi-Fi Standards

Name (802.11)   Frequency (GHz)   Maximum Transmission Rate (Mbit/s)   Note
a               5                 54                                    Less interference-prone
b               2.4               11                                    Less common
g               2.4               54                                    Widespread, backward-compatible with 11b
n               2.4 and/or 5      300                                   Common
ac              5                 approx. 865                           Expected to be common in 2015
ad              60                approx. 7000                          Released 2012, currently less common; not supported in SUSE Linux Enterprise Desktop

802.11 Legacy cards are not supported by SUSE® Linux Enterprise Desktop. Most cards using 802.11 a/b/g/n are supported. New cards usually comply with the 802.11n standard, but cards using 802.11g are still available.

29.1.3.3.1 Operating Modes

In wireless networking, various techniques and configurations are used to ensure fast, high-quality, and secure connections. Usually your Wi-Fi card operates in managed mode. However, different operating types need different setups. Wireless networks can be classified into four network modes:

Managed Mode (Infrastructure Mode), via Access Point (default mode)

Managed networks have a managing element: the access point. In this mode (also called infrastructure or default mode), all connections of the Wi-Fi stations in the network run through the access point, which may also serve as a connection to an Ethernet. To make sure only authorized stations can connect, various authentication mechanisms (WPA, etc.) are used. This is the main mode, and it consumes the least amount of energy.

Ad-hoc Mode (Peer-to-Peer Network)

Ad-hoc networks do not have an access point. The stations communicate directly with each other, therefore an ad-hoc network is usually slower than a managed network. However, the transmission range and number of participating stations are greatly limited in ad-hoc networks. They also do not support WPA authentication. Additionally, not all cards support ad-hoc mode reliably.

Master Mode

In master mode, your Wi-Fi card is used as the access point, assuming your card supports this mode. Find out the details of your Wi-Fi card at http://linux-wless.passys.nl.

Mesh Mode

Wireless mesh networks are organized in a mesh topology. A wireless mesh network's connection is spread among all wireless mesh nodes. Each node belonging to this network is connected to other nodes to share the connection, possibly over a large area.

29.1.3.3.2 Authentication

Because a wireless network is much easier to intercept and compromise than a wired network, the various standards include authentication and encryption methods.

Old Wi-Fi cards support only WEP (Wired Equivalent Privacy). However, because WEP has proven to be insecure, the Wi-Fi industry has defined an extension called WPA, which is supposed to eliminate the weaknesses of WEP. WPA, sometimes synonymous with WPA2, should be the default authentication method.

Usually the user cannot choose the authentication method. For example, when a card operates in managed mode the authentication is set by the access point. NetworkManager shows the authentication method.

29.1.3.3.3 Encryption

There are various encryption methods to ensure that no unauthorized person can read the data packets that are exchanged in a wireless network or gain access to the network:

WEP (defined in IEEE 802.11)

This standard uses the RC4 encryption algorithm, originally with a key length of 40 bits, later also with 104 bits. Often, the length is declared as 64 bits or 128 bits, depending on whether the 24 bits of the initialization vector are included. However, this standard has some weaknesses. Attacks against the keys generated by this system may be successful. Nevertheless, it is better to use WEP than not to encrypt the network.

Some vendors have implemented the non-standard Dynamic WEP. It works exactly like WEP and shares the same weaknesses, except that the key is periodically changed by a key management service.

TKIP (defined in WPA/IEEE 802.11i)

This key management protocol defined in the WPA standard uses the same encryption algorithm as WEP, but eliminates its weaknesses. Because a new key is generated for every data packet, attacks against these keys are fruitless. TKIP is used together with WPA-PSK.

CCMP (defined in IEEE 802.11i)

CCMP describes the key management. Usually, it is used in connection with WPA-EAP, but it can also be used with WPA-PSK. The encryption takes place according to AES and is stronger than the RC4 encryption of the WEP standard.

29.1.3.4 Wireless Communication: Bluetooth

Bluetooth has the broadest application spectrum of all wireless technologies. It can be used for communication between computers (laptops) and PDAs or cellular phones, as can IrDA. It can also be used to connect various computers within range. Bluetooth is also used to connect wireless system components, like a keyboard or a mouse. The range of this technology is, however, not sufficient to connect remote systems to a network. Wi-Fi is the technology of choice for communicating through physical obstacles like walls.

29.1.3.5 Wireless Communication: IrDA

IrDA is the wireless technology with the shortest range. Both communication parties must be within viewing distance of each other. Obstacles like walls cannot be overcome. One possible application of IrDA is the transmission of a file from a laptop to a cellular phone. The short path from the laptop to the cellular phone is then covered using IrDA. Long-range transmission of the file to the recipient is handled by the mobile network. Another application of IrDA is the wireless transmission of printing jobs in the office.

29.1.4 Data Security

Ideally, you protect data on your laptop against unauthorized access in multiple ways. Possible security measures can be taken in the following areas:

Protection against Theft

Always physically secure your system against theft whenever possible. Various securing tools (like chains) are available in retail stores.

Strong Authentication

Use biometric authentication in addition to standard authentication via login and password. SUSE Linux Enterprise Desktop supports fingerprint authentication.

Securing Data on the System

Important data should not only be encrypted during transmission, but also on the hard disk. This ensures its safety in case of theft. The creation of an encrypted partition with SUSE Linux Enterprise Desktop is described in Chapter 11, Encrypting Partitions and Files. Another possibility is to create encrypted home directories when adding the user with YaST.

Important
Important: Data Security and Suspend to Disk

Encrypted partitions are not unmounted during a suspend to disk event. Thus, all data on these partitions is available to any party who manages to steal the hardware and issue a resume of the hard disk.

Network Security

Any transfer of data should be secured, no matter how the transfer is done. Find general security issues regarding Linux and networks in Chapter 1, Security and Confidentiality.

29.2 Mobile Hardware

SUSE Linux Enterprise Desktop supports the automatic detection of mobile storage devices over FireWire (IEEE 1394) or USB. The term mobile storage device applies to any kind of FireWire or USB hard disk, flash disk, or digital camera. These devices are automatically detected and configured when they are connected with the system over the corresponding interface. The file manager of GNOME offers flexible handling of mobile hardware items. To unmount any of these media safely, use the Unmount Volume (GNOME) feature of the file manager. For more details refer to GNOME User Guide.

External Hard Disks (USB and FireWire)

When an external hard disk is correctly recognized by the system, its icon appears in the file manager. Clicking the icon displays the contents of the drive. It is possible to create directories and files here and edit or delete them. To rename a hard disk, select the corresponding menu item from the right-click contextual menu. This name change is limited to display in the file manager. The descriptor by which the device is mounted in /media remains unaffected.

USB Flash Disks

These devices are handled by the system like external hard disks. It is similarly possible to rename the entries in the file manager.

Digital Cameras (USB and FireWire)

Digital cameras recognized by the system also appear as external drives in the overview of the file manager. The images can then be processed using Shotwell. For advanced photo processing use The GIMP. For a short introduction to The GIMP, see Chapter 18, GIMP: Manipulating Graphics.

29.3 Cellular Phones and PDAs

A desktop system or a laptop can communicate with a cellular phone via Bluetooth or IrDA. Some models support both protocols and some only one of the two. The usage areas for the two protocols and the corresponding extended documentation have already been mentioned in Section 29.1.3.4, “Wireless Communication: Bluetooth” and Section 29.1.3.5, “Wireless Communication: IrDA”. The configuration of these protocols on the cellular phones themselves is described in their manuals.

29.4 For More Information

The central point of reference for all questions regarding mobile devices and Linux is http://tuxmobil.org/. Various sections of that Web site deal with the hardware and software aspects of laptops, PDAs, cellular phones and other mobile hardware.

A similar approach is taken by http://www.linux-on-laptops.com/, where you can find information about laptops and handhelds.

SUSE maintains a mailing list in German dedicated to the subject of laptops. See http://lists.opensuse.org/opensuse-mobile-de/. On this list, users and developers discuss all aspects of mobile computing with SUSE Linux Enterprise Desktop. Postings in English are answered, but the majority of the archived information is only available in German. Use http://lists.opensuse.org/opensuse-mobile/ for English postings.

30 Using NetworkManager

  • Filename: nm.xml
  • ID: cha.nm

NetworkManager is the ideal solution for laptops and other portable computers. It supports state-of-the-art encryption types and standards for network connections, including connections to 802.1X protected networks. 802.1X is the IEEE Standard for Local and Metropolitan Area Networks—Port-Based Network Access Control. With NetworkManager, you need not worry about configuring network interfaces and switching between wired or wireless networks when you are moving. NetworkManager can automatically connect to known wireless networks or manage several network connections in parallel—the fastest connection is then used as default. Furthermore, you can manually switch between available networks and manage your network connection using an applet in the system tray.

Instead of only one connection being active, multiple connections may be active at once. This enables you to unplug your laptop from an Ethernet and remain connected via a wireless connection.

30.1 Use Cases for NetworkManager

NetworkManager provides a sophisticated and intuitive user interface, which enables users to easily switch their network environment. However, NetworkManager is not a suitable solution in the following cases:

  • Your computer provides network services for other computers in your network, for example, it is a DHCP or DNS server.

  • Your computer is a Xen server or your system is a virtual system inside Xen.

30.2 Enabling or Disabling NetworkManager

On laptop computers, NetworkManager is enabled by default. However, it can be enabled or disabled at any time in the YaST Network Settings module.

  1. Run YaST and go to System › Network Settings.

  2. The Network Settings dialog opens. Go to the Global Options tab.

  3. To configure and manage your network connections with NetworkManager:

    1. In the Network Setup Method field, select User Controlled with NetworkManager.

    2. Click OK and close YaST.

    3. Configure your network connections with NetworkManager as described in Section 30.3, “Configuring Network Connections”.

  4. To deactivate NetworkManager and control the network with your own configuration:

    1. In the Network Setup Method field, choose Controlled by wicked.

    2. Click OK.

    3. Set up your network card with YaST using automatic configuration via DHCP or a static IP address.

      Find a detailed description of the network configuration with YaST in Section 17.4, “Configuring a Network Connection with YaST”.

30.3 Configuring Network Connections

After having enabled NetworkManager in YaST, configure your network connections with the NetworkManager front-end available in GNOME. It shows tabs for all types of network connections, such as wired, wireless, mobile broadband, DSL, and VPN connections.

To open the network configuration dialog in GNOME, open the settings menu via the status menu and click the Network entry.

Note
Note: Availability of Options

Depending on your system setup, you may not be allowed to configure connections. In a secured environment, some options may be locked or require root permission. Ask your system administrator for details.

GNOME Network Connections Dialog
Figure 30.1: GNOME Network Connections Dialog
Procedure 30.1: Adding and Editing Connections
  1. Open the NetworkManager configuration dialog.

  2. To add a Connection:

    1. Click the + icon in the lower left corner.

    2. Select your preferred connection type and follow the instructions.

    3. When you are finished click Add.

    4. After you have confirmed your changes, the newly configured network connection appears in the list of available networks, which you can access by opening the Status Menu.

  3. To edit a connection:

    1. Select the entry to edit.

    2. Click the gear icon to open the Connection Settings dialog.

    3. Insert your changes and click Apply to save them.

    4. To make your connection available as a system connection, go to the Identity tab and set the check box Make available to other users. For more information about user and system connections, see Section 30.4.1, “User and System Connections”.

30.3.1 Managing Wired Network Connections

If your computer is connected to a wired network, use the NetworkManager applet to manage the connection.

  1. Open the Status Menu and click Wired to change the connection details or to switch it off.

  2. To change the settings click Wired Settings and then click the gear icon.

  3. To switch off all network connections, activate the Airplane Mode setting.

30.3.2 Managing Wireless Network Connections

Visible wireless networks are listed in the GNOME NetworkManager applet menu under Wireless Networks. The signal strength of each network is also shown in the menu. Encrypted wireless networks are marked with a shield icon.

Procedure 30.2: Connecting to a visible Wireless Network
  1. To connect to a visible wireless network, open the Status Menu and click Wi-Fi.

  2. Click Turn On to enable it.

  3. Click Select Network, select your Wi-Fi Network and click Connect.

  4. If the network is encrypted, a configuration dialog opens. It shows the type of encryption the network uses and text boxes for entering the login credentials.

Procedure 30.3: Connecting to an Invisible Wireless Network
  1. To connect to a network that does not broadcast its service set identifier (SSID or ESSID) and therefore cannot be detected automatically, open the Status Menu and click Wi-Fi.

  2. Click Wi-Fi Settings to open the detailed settings menu.

  3. Make sure your Wi-Fi is enabled and click Connect to Hidden Network.

  4. In the dialog that opens, enter the SSID or ESSID in Network Name and set encryption parameters if necessary.

A wireless network that has been chosen explicitly will remain connected as long as possible. If a network cable is plugged in during that time, any connections that have been set to Stay connected when possible will be connected, while the wireless connection remains up.

30.3.3 Enabling Wireless Captive Portal Detection

On the initial connection, many public wireless hotspots force users to visit a landing page (the captive portal). Before you have logged in or agreed to the terms and conditions, all your HTTP requests are redirected to the provider's captive portal.

When connecting to a wireless network with a captive portal, NetworkManager and GNOME will automatically show the login page as part of the connection process. This ensures that you always know when you are connected, and helps you to get set up as quickly as possible without needing to open a browser to log in.

To enable this feature, install the package NetworkManager-branding-SLE and restart NetworkManager with:

tux > sudo systemctl restart network

Whenever you connect to a network with a captive portal, NetworkManager (or GNOME) will open the captive portal login page for you. Log in with your credentials to get access to the Internet.

30.3.4 Configuring Your Wi-Fi/Bluetooth Card as an Access Point

If your Wi-Fi/Bluetooth card supports access point mode, you can use NetworkManager for the configuration.

  1. Open the Status Menu and click Wi-Fi.

  2. Click Wi-Fi Settings to open the detailed settings menu.

  3. Click Use as Hotspot and follow the instructions.

  4. Use the credentials shown in the resulting dialog to connect to the hotspot from a remote machine.

30.3.5 NetworkManager and VPN

NetworkManager supports several Virtual Private Network (VPN) technologies. For each technology, SUSE Linux Enterprise Desktop comes with a base package providing the generic support for NetworkManager. In addition to that, you also need to install the respective desktop-specific package for your applet.

OpenVPN

To use this VPN technology, install:

  • NetworkManager-openvpn

  • NetworkManager-openvpn-gnome

vpnc (Cisco IPsec VPN)

To use this VPN technology, install:

  • NetworkManager-vpnc

  • NetworkManager-vpnc-gnome

PPTP (Point-to-Point Tunneling Protocol)

To use this VPN technology, install:

  • NetworkManager-pptp

  • NetworkManager-pptp-gnome

The following procedure describes how to set up your computer as an OpenVPN client using NetworkManager. Setting up other types of VPNs works analogously.

Before you begin, make sure that the package NetworkManager-openvpn-gnome is installed and all dependencies have been resolved.
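For example, you can install both OpenVPN packages with zypper:

sudo zypper install NetworkManager-openvpn NetworkManager-openvpn-gnome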

Procedure 30.4: Setting Up OpenVPN with NetworkManager
  1. Open the application Settings by clicking the status icons at the right end of the panel and clicking the wrench and screwdriver icon. In the window All Settings, choose Network.

  2. Click the + icon.

  3. Select VPN and then OpenVPN.

  4. Choose the Authentication type. Depending on the setup of your OpenVPN server, choose Certificates (TLS) or Password with Certificates (TLS).

  5. Insert the necessary values into the respective text boxes. For our example configuration, these are:

    Gateway

    The remote endpoint of the VPN server

    User name

    The user (only available when you have selected Password with Certificates (TLS))

    Password

    The password for the user (only available when you have selected Password with Certificates (TLS))

    User Certificate

    /etc/openvpn/client1.crt

    CA Certificate

    /etc/openvpn/ca.crt

    Private Key

    /etc/openvpn/client1.key

  6. Finish the configuration with Add.

  7. To enable the connection, in the Network panel of the Settings application click the switch button. Alternatively, click the status icons at the right end of the panel, click the name of your VPN and then Connect.

30.4 NetworkManager and Security

NetworkManager distinguishes two types of wireless connections, trusted and untrusted. A trusted connection is any network that you explicitly selected in the past. All others are untrusted. Trusted connections are identified by the name and MAC address of the access point. Using the MAC address ensures that you cannot use a different access point with the name of your trusted connection.

NetworkManager periodically scans for available wireless networks. If multiple trusted networks are found, the most recently used is automatically selected. If all available networks are untrusted, NetworkManager waits for your selection.

If the encryption setting changes but the name and MAC address remain the same, NetworkManager attempts to connect, but first you are asked to confirm the new encryption settings and provide any updates, such as a new key.

If you switch from using a wireless connection to offline mode, NetworkManager blanks the SSID or ESSID. This ensures that the card is disconnected.

30.4.1 User and System Connections

NetworkManager knows two types of connections: user and system connections. User connections are connections that become available to NetworkManager when the first user logs in. The user is asked for any required credentials, and when the user logs out, the connections are disconnected and removed from NetworkManager. Connections that are defined as system connections can be shared by all users and are made available right after NetworkManager is started—before any users log in. In the case of system connections, all credentials must be provided at the time the connection is created. Such system connections can be used to automatically connect to networks that require authorization. For information on how to configure user or system connections with NetworkManager, refer to Section 30.3, “Configuring Network Connections”.

30.4.2 Storing Passwords and Credentials

If you do not want to re-enter your credentials each time you want to connect to an encrypted network, you can use the GNOME Keyring Manager to store your credentials encrypted on the disk, secured by a master password.

NetworkManager can also retrieve its certificates for secure connections (for example, encrypted wired, wireless or VPN connections) from the certificate store. For more information, refer to Chapter 12, Certificate Store.

30.5 Frequently Asked Questions

In the following, find some frequently asked questions about configuring special network options with NetworkManager.

How to tie a connection to a specific device?

By default, connections in NetworkManager are device type-specific: they apply to all physical devices with the same type. If more than one physical device per connection type is available (for example, your machine is equipped with two Ethernet cards), you can tie a connection to a certain device.

To do this in GNOME, first look up the MAC address of your device (use the Connection Information available from the applet, or use the output of command line tools like nm-tool or wicked show all). Then start the dialog for configuring network connections and choose the connection you want to modify. On the Wired or Wireless tab, enter the MAC Address of the device and confirm your changes.
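For example, assuming the Ethernet device is named eth0 (a hypothetical name), the standard ip tool also prints the hardware (MAC) address:

tux > ip link show eth0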

How to specify a certain access point in case multiple access points with the same ESSID are detected?

When multiple access points with different wireless bands (a/b/g/n) are available, the access point with the strongest signal is automatically chosen by default. To override this, use the BSSID field when configuring wireless connections.

The Basic Service Set Identifier (BSSID) uniquely identifies each Basic Service Set. In an infrastructure Basic Service Set, the BSSID is the MAC address of the wireless access point. In an independent (ad-hoc) Basic Service Set, the BSSID is a locally administered MAC address generated from a 46-bit random number.

Start the dialog for configuring network connections as described in Section 30.3, “Configuring Network Connections”. Choose the wireless connection you want to modify and click Edit. On the Wireless tab, enter the BSSID.

How to share network connections to other computers?

The primary device (the device which is connected to the Internet) does not need any special configuration. However, you need to configure the device that is connected to the local hub or machine as follows:

  1. Start the dialog for configuring network connections as described in Section 30.3, “Configuring Network Connections”. Choose the connection you want to modify and click Edit. Switch to the IPv4 Settings tab and from the Method drop-down box, activate Shared to other computers. That will enable IP traffic forwarding and run a DHCP server on the device. Confirm your changes in NetworkManager.

  2. As the DHCP server uses port 67, make sure that it is not blocked by the firewall: On the machine sharing the connections, start YaST and select Security and Users › Firewall. Switch to the Allowed Services category. If DHCP Server is not already shown as an Allowed Service, select DHCP Server from Services to Allow and click Add. Confirm your changes in YaST.

How to provide static DNS information with automatic (DHCP, PPP, VPN) addresses?

In case a DHCP server provides invalid DNS information (and/or routes), you can override it. Start the dialog for configuring network connections as described in Section 30.3, “Configuring Network Connections”. Choose the connection you want to modify and click Edit. Switch to the IPv4 Settings tab, and from the Method drop-down box, activate Automatic (DHCP) addresses only. Enter the DNS information in the DNS Servers and Search Domains fields. To Ignore automatically obtained routes click Routes and activate the respective check box. Confirm your changes.

How to make NetworkManager connect to password protected networks before a user logs in?

Define a system connection that can be used for such purposes. For more information, refer to Section 30.4.1, “User and System Connections”.

30.6 Troubleshooting

Connection problems can occur. Some common problems related to NetworkManager include the applet not starting or a missing VPN option. Methods for resolving and preventing these problems depend on the tool used.

NetworkManager Desktop Applet Does Not Start

The applet starts automatically if the network is set up for NetworkManager control. If the applet does not start, check whether NetworkManager is enabled in YaST as described in Section 30.2, “Enabling or Disabling NetworkManager”. Then make sure that the NetworkManager-gnome package is also installed.

If the desktop applet is installed but is not running for some reason, start it manually with the command nm-applet.

NetworkManager Applet Does Not Include the VPN Option

Support for NetworkManager, applets, and VPN for NetworkManager is distributed in separate packages. If your NetworkManager applet does not include the VPN option, check if the packages with NetworkManager support for your VPN technology are installed. For more information, see Section 30.3.5, “NetworkManager and VPN”.

No Network Connection Available

If you have configured your network connection correctly and all other components for the network connection (router, etc.) are also up and running, it sometimes helps to restart the network interfaces on your computer. To do so, log in to a command line as root and run systemctl restart network.

30.7 For More Information

More information about NetworkManager can be found on the following Web sites and directories:

NetworkManager Project Page

http://projects.gnome.org/NetworkManager/

Package Documentation

Also check out the information in the following directories for the latest information about NetworkManager and the GNOME applet:

  • /usr/share/doc/packages/NetworkManager/,

  • /usr/share/doc/packages/NetworkManager-gnome/.

31 Power Management

  • Filename: pcmcia_apm.xml
  • ID: cha.pmanage

Power management is especially important on laptop computers, but is also useful on other systems. ACPI (Advanced Configuration and Power Interface) is available on all modern computers (laptops, desktops, and servers). Power management technologies require suitable hardware and BIOS routines. Most laptops and many modern desktops and servers meet these requirements. It is also possible to control CPU frequency scaling to save power or decrease noise.

31.1 Power Saving Functions

Power saving functions are not only significant for the mobile use of laptops, but also for desktop systems. The main functions and their use in ACPI are:

Standby

not supported.

Suspend (to memory)

This mode writes the entire system state to the RAM. Subsequently, the entire system except the RAM is put to sleep. In this state, the computer consumes very little power. The advantage of this state is the possibility of resuming work at the same point within a few seconds without having to boot and restart applications. This function corresponds to the ACPI state S3.

Hibernation (suspend to disk)

In this operating mode, the entire system state is written to the hard disk and the system is powered off. There must be a swap partition at least as big as the RAM to write all the active data. Reactivation from this state takes about 30 to 90 seconds. The state prior to the suspend is restored. Some manufacturers offer useful hybrid variants of this mode, such as RediSafe in IBM Thinkpads. The corresponding ACPI state is S4. In Linux, suspend to disk is performed by kernel routines that are independent from ACPI.

Note
Note: Changed UUID for Swap Partitions when Formatting via mkswap

Do not reformat existing swap partitions with mkswap if possible. Reformatting with mkswap will change the UUID value of the swap partition. Either reformat via YaST (will update /etc/fstab) or adjust /etc/fstab manually.

Battery Monitor

ACPI checks the battery charge status and provides information about it. Additionally, it coordinates actions to perform when a critical charge status is reached.

Automatic Power-Off

Following a shutdown, the computer is powered off. This is especially important when an automatic shutdown is performed shortly before the battery is empty.

Processor Speed Control

In connection with the CPU, energy can be saved in three different ways: frequency and voltage scaling (also known as PowerNow! or Speedstep), throttling and putting the processor to sleep (C-states). Depending on the operating mode of the computer, these methods can also be combined.

31.2 Advanced Configuration and Power Interface (ACPI)

ACPI was designed to enable the operating system to set up and control the individual hardware components. ACPI supersedes both Power Management Plug and Play (PnP) and Advanced Power Management (APM). It delivers information about the battery, AC adapter, temperature, fan and system events, like close lid or battery low.

The BIOS provides tables containing information about the individual components and hardware access methods. The operating system uses this information for tasks like assigning interrupts or activating and deactivating components. Because the operating system executes commands stored in the BIOS, the functionality depends on the BIOS implementation. The tables ACPI can detect and load are reported in journald. See Chapter 16, journalctl: Query the systemd Journal for more information on viewing the journal log messages. See Section 31.2.2, “Troubleshooting” for more information about troubleshooting ACPI problems.
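To see which ACPI tables were detected, you can, for example, filter the journal messages of the current boot:

journalctl -b | grep -i acpi    # show ACPI-related messages since the last boot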

31.2.1 Controlling the CPU Performance

The CPU can save energy in three ways:

  • Frequency and Voltage Scaling

  • Throttling the Clock Frequency (T-states)

  • Putting the Processor to Sleep (C-states)

Depending on the operating mode of the computer, these methods can be combined. Saving energy also means that the system heats up less and the fans are activated less frequently.

Frequency scaling and throttling are only relevant if the processor is busy, because the most economic C-state is applied anyway when the processor is idle. If the CPU is busy, frequency scaling is the recommended power saving method. Often the processor only works with a partial load. In this case, it can be run with a lower frequency. Usually, dynamic frequency scaling controlled by the kernel on-demand governor is the best approach.

Throttling should be used as the last resort, for example, to extend the battery operation time despite a high system load. However, some systems do not run smoothly when they are throttled too much. Moreover, CPU throttling does not make sense if the CPU has little to do.

For in-depth information, refer to Chapter 11, Power Management.

31.2.2 Troubleshooting

There are two different types of problems. On one hand, the ACPI code of the kernel may contain bugs that were not detected in time. In this case, a solution will be made available for download. More often, the problems are caused by the BIOS. Sometimes, deviations from the ACPI specification are purposely integrated in the BIOS to circumvent errors in the ACPI implementation of other widespread operating systems. Hardware components that have serious errors in the ACPI implementation are recorded in a blacklist that prevents the Linux kernel from using ACPI for these components.

The first thing to do when problems are encountered is to update the BIOS. If the computer does not boot, one of the following boot parameters may be helpful:

pci=noacpi

Do not use ACPI for configuring the PCI devices.

acpi=ht

Only perform a simple resource configuration. Do not use ACPI for other purposes.

acpi=off

Disable ACPI.

Warning
Warning: Problems Booting without ACPI

Some newer machines (especially SMP systems and AMD64 systems) need ACPI for configuring the hardware correctly. On these machines, disabling ACPI can cause problems.

Sometimes, the machine is confused by hardware that is attached over USB or FireWire. If a machine refuses to boot, unplug all unneeded hardware and try again.

Monitor the boot messages of the system with the command dmesg -T | grep -2i acpi (or all messages, because the problem may not be caused by ACPI) after booting. If an error occurs while parsing an ACPI table, the most important table—the DSDT (Differentiated System Description Table)—can be replaced with an improved version. In this case, the faulty DSDT of the BIOS is ignored. The procedure is described in Section 31.4, “Troubleshooting”.

In the kernel configuration, there is a switch for activating ACPI debug messages. If a kernel with ACPI debugging is compiled and installed, detailed information is issued.

If you experience BIOS or hardware problems, it is always advisable to contact the manufacturers, even if they do not always provide assistance for Linux. Manufacturers will only take the issue seriously if they realize that an adequate number of their customers use Linux.

31.2.2.1 For More Information

31.3 Rest for the Hard Disk

In Linux, the hard disk can be put to sleep entirely if it is not needed or it can be run in a more economic or quieter mode. On modern laptops, you do not need to switch off the hard disks manually, because they automatically enter an economic operating mode whenever they are not needed. However, if you want to maximize power savings, test some of the following methods, using the hdparm command.

It can be used to modify various hard disk settings. The option -y instantly switches the hard disk to the standby mode. -Y puts it to sleep. hdparm -S X causes the hard disk to be spun down after a certain period of inactivity. Replace X as follows: 0 disables this mechanism, causing the hard disk to run continuously. Values from 1 to 240 are multiplied by 5 seconds. Values from 241 to 251 correspond to 1 to 11 times 30 minutes.
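For example (illustrative values; replace /dev/sda with your disk device):

sudo hdparm -y /dev/sda      # switch the disk to standby mode immediately
sudo hdparm -S 12 /dev/sda   # spin down after 60 seconds of inactivity (12 * 5 seconds)
sudo hdparm -S 242 /dev/sda  # spin down after 1 hour (242 corresponds to 2 * 30 minutes)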

Internal power saving options of the hard disk can be controlled with the option -B. Select a value from 0 (maximum power saving) to 255 (maximum throughput). The result depends on the hard disk used and is difficult to assess. To make a hard disk quieter, use the option -M. Select a value from 128 (quiet) to 254 (fast).
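For example (again illustrative values; whether -B and -M have an effect depends on the disk):

sudo hdparm -B 128 /dev/sda  # power saving without aggressive spin-downs
sudo hdparm -M 128 /dev/sda  # quietest acoustic management setting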

Often, it is not so easy to put the hard disk to sleep. In Linux, numerous processes write to the hard disk, waking it up repeatedly. Therefore, it is important to understand how Linux handles data that needs to be written to the hard disk. First, all data is buffered in the RAM. This buffer is monitored by the pdflush daemon. When the data reaches a certain age limit or when the buffer is filled to a certain degree, the buffer content is flushed to the hard disk. The buffer size is dynamic and depends on the size of the memory and the system load. By default, pdflush is set to short intervals to achieve maximum data integrity. It checks the buffer every 5 seconds and writes the data to the hard disk. The following variables are interesting:

/proc/sys/vm/dirty_writeback_centisecs

Contains the delay until a pdflush thread wakes up (in hundredths of a second).

/proc/sys/vm/dirty_expire_centisecs

Defines the time after which a dirty page should be written out at the latest. The default is 3000, which means 30 seconds.

/proc/sys/vm/dirty_background_ratio

Maximum percentage of total memory that may be filled with dirty pages before pdflush begins to write them. The default is 5%.

/proc/sys/vm/dirty_ratio

When the amount of dirty pages exceeds this percentage of the total memory, processes are forced to write out dirty buffers during their time slice instead of continuing to write.
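You can inspect and change these values at runtime with sysctl. The following is only an illustration with an arbitrary example value; note the warning below before changing anything:

sysctl vm.dirty_writeback_centisecs            # show the current wake-up interval
sudo sysctl -w vm.dirty_expire_centisecs=6000  # example: let dirty pages age up to 60 seconds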

Warning
Warning: Impairment of the Data Integrity

Changes to the pdflush daemon settings endanger the data integrity.

Apart from these processes, journaling file systems, like Btrfs, Ext3, Ext4 and others, write their metadata independently from pdflush, which also prevents the hard disk from spinning down. To avoid this, a special kernel extension has been developed for mobile devices. To use the extension, install the laptop-mode-tools package and see /usr/src/linux/Documentation/laptops/laptop-mode.txt for details.

Another important factor is the way active programs behave. For example, good editors regularly write hidden backups of the currently modified file to the hard disk, causing the disk to wake up. Features like this can be disabled at the expense of data integrity.

In this context, the mail daemon postfix uses the variable POSTFIX_LAPTOP. If this variable is set to yes, postfix accesses the hard disk far less frequently.

In SUSE Linux Enterprise Desktop these technologies are controlled by laptop-mode-tools.

31.4 Troubleshooting

All error messages and alerts are logged in the system journal that can be queried with the command journalctl (see Chapter 16, journalctl: Query the systemd Journal for more information). The following sections cover the most common problems.

31.4.1 CPU Frequency Does Not Work

Refer to the kernel sources to see if your processor is supported. You may need a special kernel module or module option to activate CPU frequency control. If the kernel-source package is installed, this information is available in /usr/src/linux/Documentation/cpu-freq/*.

31.5 For More Information

Part VI Troubleshooting

32 Help and Documentation

SUSE® Linux Enterprise Desktop comes with various sources of information and documentation, many of which are already integrated into your installed system.

33 Gathering System Information for Support

For a quick overview of all relevant system information of a machine, SUSE Linux Enterprise Desktop offers the hostinfo package. It also helps system administrators to check for tainted kernels (that are not supported) or any third-party packages installed on a machine.

In case of problems, a detailed system report may be created with either the supportconfig command line tool or the YaST Support module. Both will collect information about the system such as: current kernel version, hardware, installed packages, partition setup, and much more. The result is a TAR archive of files. After opening a Service Request (SR), you can upload the TAR archive to Global Technical Support. It will help to locate the issue you reported and to assist you in solving the problem.

Additionally, you can analyze the supportconfig output for known issues to help resolve problems faster. For this purpose, SUSE Linux Enterprise Desktop provides both an appliance and a command line tool for Supportconfig Analysis (SCA).

34 Common Problems and Their Solutions

This chapter describes a range of potential problems and their solutions. Even if your situation is not precisely listed here, there may be one similar enough to offer hints to the solution of your problem.

32 Help and Documentation


SUSE® Linux Enterprise Desktop comes with various sources of information and documentation, many of which are already integrated into your installed system.

Documentation in /usr/share/doc

This traditional help directory holds various documentation files and release notes for your system. It also contains information about installed packages in the subdirectory packages. Find more detailed information in Section 32.1, “Documentation Directory”.

Man Pages and Info Pages for Shell Commands

When working with the shell, you do not need to know the options of the commands by heart. Traditionally, the shell provides integrated help by means of man pages and info pages. Read more in Section 32.2, “Man Pages” and Section 32.3, “Info Pages”.

Desktop Help Center

The help center of the GNOME desktop (Help) provides central access to the most important documentation resources on your system in searchable form. These resources include online help for installed applications, man pages, info pages, and the SUSE manuals delivered with your product.

Separate Help Packages for Some Applications

When installing new software with YaST, the software documentation is usually installed automatically and appears in the help center of your desktop. However, some applications, such as GIMP, may have different online help packages that can be installed separately with YaST and do not integrate into the help centers.

32.1 Documentation Directory

The traditional directory to find documentation on your installed Linux system is /usr/share/doc. Usually, the directory contains information about the packages installed on your system, plus release notes, manuals, and more.

Note
Note: Contents Depends on Installed Packages

In the Linux world, many manuals and other kinds of documentation are available in the form of packages, like software. How much and which information you find in /usr/share/doc also depends on the (documentation) packages installed. If you cannot find the subdirectories mentioned here, check if the respective packages are installed on your system and add them with YaST, if needed.
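For example, you can also check and install a documentation package from the command line (PACKAGENAME is a placeholder):

zypper search -i PACKAGENAME     # check whether the package is installed
sudo zypper install PACKAGENAME  # install it if it is missing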

32.1.1 SUSE Manuals

We provide HTML and PDF versions of our books in different languages. In the manual subdirectory, find HTML versions of most of the SUSE manuals available for your product. For an overview of all documentation available for your product refer to the preface of the manuals.

If more than one language is installed, /usr/share/doc/manual may contain different language versions of the manuals. The HTML versions of the SUSE manuals are also available in the help center of both desktops. For information on where to find the PDF and HTML versions of the books on your installation media, refer to the SUSE Linux Enterprise Desktop Release Notes. They are available on your installed system under /usr/share/doc/release-notes/ or online at your product-specific Web page at http://www.suse.com/releasenotes//.

32.1.2 Package Documentation

Under packages, find the documentation that is included in the software packages installed on your system. For every package, a subdirectory /usr/share/doc/packages/PACKAGENAME is created. It often contains README files for the package and sometimes examples, configuration files, or additional scripts. The following list introduces typical files to be found under /usr/share/doc/packages. None of these entries are mandatory and many packages might only include a few of them.

AUTHORS

List of the main developers.

BUGS

Known bugs or malfunctions. Might also contain a link to a Bugzilla Web page where you can search all bugs.

CHANGES , ChangeLog

Summary of changes from version to version. Usually interesting for developers, because it is very detailed.

COPYING , LICENSE

Licensing information.

FAQ

Questions and answers collected from mailing lists or newsgroups.

INSTALL

How to install this package on your system. As the package is already installed by the time you read this file, you can safely ignore its contents.

README, README.*

General information on the software. For example, for what purpose and how to use it.

TODO

Things that are not implemented yet, but probably will be in the future.

MANIFEST

List of files with a brief summary.

NEWS

Description of what is new in this version.

32.2 Man Pages

Man pages are an essential part of any Linux system. They explain the usage of a command and all available options and parameters. Man pages can be accessed with man followed by the name of the command, for example, man ls.

Man pages are displayed directly in the shell. To navigate them, move up and down with Page ↑ and Page ↓. Move between the beginning and the end of a document with Home and End. End this viewing mode by pressing Q. Learn more about the man command itself with man man. Man pages are sorted in categories as shown in Table 32.1, “Man Pages—Categories and Descriptions” (taken from the man page for man itself).

Table 32.1: Man Pages—Categories and Descriptions

  1: Executable programs or shell commands
  2: System calls (functions provided by the kernel)
  3: Library calls (functions within program libraries)
  4: Special files (usually found in /dev)
  5: File formats and conventions (/etc/fstab)
  6: Games
  7: Miscellaneous (including macro packages and conventions), for example, man(7), groff(7)
  8: System administration commands (usually only for root)
  9: Kernel routines (nonstandard)

Each man page consists of several parts labeled NAME, SYNOPSIS, DESCRIPTION, SEE ALSO, LICENSING, and AUTHOR. There may be additional sections available depending on the type of command.
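For example, to read the man page describing the /etc/fstab file format from category 5 instead of a command of the same name:

man 5 fstab    # category 5: file formats and conventions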

32.3 Info Pages

Info pages are another important source of information on your system. Usually, they are more detailed than man pages. They go beyond command line options and sometimes contain whole tutorials or reference documentation. To view the info page for a certain command, enter info followed by the name of the command, for example, info ls. You can browse an info page with a viewer directly in the shell and display the different sections, called nodes. Use Space to move forward and Backspace to move backward. Within a node, you can also browse with Page ↑ and Page ↓, but only Space and Backspace will take you to the previous or subsequent node. Press Q to end the viewing mode. Not every command comes with an info page and vice versa.

32.4 Online Resources

In addition to the online versions of the SUSE manuals installed under /usr/share/doc, you can also access the product-specific manuals and documentation on the Web. For an overview of all documentation available for SUSE Linux Enterprise Desktop check out your product-specific documentation Web page at http://www.suse.com/doc/.

If you are searching for additional product-related information, you can also refer to the following Web sites:

SUSE Technical Support

SUSE Technical Support can be found at http://www.suse.com/support/ if you have questions or need solutions for technical problems.

SUSE Forums

There are several forums where you can dive into discussions about SUSE products. See http://forums.suse.com/ for a list.

SUSE Conversations

An online community, which offers articles, tips, Q and A, and free tools to download: http://www.suse.com/communities/conversations/

GNOME Documentation

Documentation for GNOME users, administrators and developers is available at http://library.gnome.org/.

The Linux Documentation Project

The Linux Documentation Project (TLDP) is run by a team of volunteers who write Linux-related documentation (see http://www.tldp.org). It is probably the most comprehensive documentation resource for Linux. The set of documents contains tutorials for beginners, but is mainly focused on experienced users and professional system administrators. TLDP publishes HOWTOs, FAQs, and guides (handbooks) under a free license. Parts of the documentation from TLDP are also available on SUSE Linux Enterprise Desktop.

You can also try general-purpose search engines. For example, use the search terms Linux CD-RW help or OpenOffice file conversion problem if you have trouble with burning CDs or LibreOffice file conversion.

33 Gathering System Information for Support

Abstract

For a quick overview of all relevant system information of a machine, SUSE Linux Enterprise Desktop offers the hostinfo package. It also helps system administrators to check for tainted kernels (that are not supported) or any third-party packages installed on a machine.

In case of problems, a detailed system report may be created with either the supportconfig command line tool or the YaST Support module. Both will collect information about the system such as: current kernel version, hardware, installed packages, partition setup, and much more. The result is a TAR archive of files. After opening a Service Request (SR), you can upload the TAR archive to Global Technical Support. It will help to locate the issue you reported and to assist you in solving the problem.

Additionally, you can analyze the supportconfig output for known issues to help resolve problems faster. For this purpose, SUSE Linux Enterprise Desktop provides both an appliance and a command line tool for Supportconfig Analysis (SCA).

33.1 Displaying Current System Information

For a quick and easy overview of all relevant system information when logging in to a server, use the package hostinfo. After it has been installed on a machine, the console displays the following information to any root user that logs in to this machine:

Example 33.1: Output of hostinfo When Logging In as root
Hostname:                 earth
Current As Of:            Wed 12 Mar 2014 03:57:05 PM CET
Distribution:             SUSE Linux Enterprise Server 12
 -Service Pack:           0
Architecture:             x86_64
Kernel Version:           3.12.12-3-default
 -Installed:              Mon 10 Mar 2014 03:15:05 PM CET
 -Status:                 Not Tainted
Last Updated Package:     Wed 12 Mar 2014 03:56:43 PM CET
 -Patches Needed:         0
 -Security:               0
 -3rd Party Packages:     0
IPv4 Address:             ens3 192.168.1.1
Total/Free/+Cache Memory: 983/95/383 MB (38% Free)
Hard Disk:                /dev/sda 10 GB

In case the output shows a tainted kernel status, see Section 33.6, “Support of Kernel Modules” for more details.

33.2 Collecting System Information with Supportconfig

To create a TAR archive with detailed system information that you can hand over to Global Technical Support, use either the supportconfig command line tool directly or the YaST Support module. The command line tool is provided by the package supportutils which is installed by default. The YaST Support module is also based on the command line tool.

33.2.1 Creating a Service Request Number

Supportconfig archives can be generated at any time. However, for handing over the supportconfig data to Global Technical Support, you need to generate a service request number first. You will need it to upload the archive to support.

To create a service request, go to https://scc.suse.com/support/requests and follow the instructions on the screen. Write down your 12-digit service request number.

Note
Note: Privacy Statement

SUSE and Micro Focus treat system reports as confidential data. For details about our privacy commitment, see https://www.suse.com/company/policies/privacy/.

33.2.2 Upload Targets

After having created a service request number, you can upload your supportconfig archives to Global Technical Support as described in Procedure 33.1, “Submitting Information to Support with YaST” or Procedure 33.2, “Submitting Information to Support from Command Line”. Use one of the following upload targets:

Alternatively, you can manually attach the TAR archive to your service request using the service request URL: https://scc.suse.com/support/requests.

33.2.3 Creating a Supportconfig Archive with YaST

To use YaST to gather your system information, proceed as follows:

  1. Start YaST and open the Support module.

  2. Click Create report tarball.

  3. In the next window, select one of the supportconfig options from the radio button list. Use Custom (Expert) Settings is preselected by default. If you want to test the report function first, use Only gather a minimum amount of info. For some background information on the other options, refer to the supportconfig man page.

    Proceed with Next.

  4. Enter your contact information. It will be written to a file called basic-environment.txt and included in the archive to be created.

  5. If you want to submit the archive to Global Technical Support at the end of the information collection process, Upload Information is required. YaST automatically proposes an upload server. If you want to modify it, refer to Section 33.2.2, “Upload Targets” for details of which upload servers are available.

    If you want to submit the archive later on, you can leave the Upload Information empty for now.

  6. Proceed with Next.

  7. The information gathering begins.

    After the process is finished, continue with Next.

  8. Review the data collection: Select the File Name of a log file to view its contents in YaST. To remove any files you want excluded from the TAR archive before submitting it to support, use Remove from Data. Continue with Next.

  9. Save the TAR archive. If you started the YaST module as root user, by default YaST proposes to save the archive to /var/log (otherwise, to your home directory). The file name format is nts_HOST_DATE_TIME.tbz.

  10. If you want to upload the archive to support directly, make sure Upload log files tarball to URL is activated. The Upload Target shown here is the one that YaST proposes in Step 5. If you want to modify the upload target, find detailed information of which upload servers are available in Section 33.2.2, “Upload Targets”.

  11. If you want to skip the upload, deactivate Upload log files tarball to URL.

  12. Confirm your changes to close the YaST module.

33.2.4 Creating a Supportconfig Archive from Command Line

The following procedure shows how to create a supportconfig archive, but without submitting it to support directly. For uploading it, you need to run the command with certain options as described in Procedure 33.2, “Submitting Information to Support from Command Line”.

  1. Open a shell and become root.

  2. Run supportconfig without any options. This gathers the default system information.

  3. Wait for the tool to complete the operation.

  4. The default archive location is /var/log, with the file name format being nts_HOST_DATE_TIME.tbz (see the example session below).
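A minimal session might look as follows (the file name varies with host, date, and time):

sudo supportconfig       # gather the default system information
ls /var/log/nts_*.tbz    # locate the resulting TAR archive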

33.2.5 Common Supportconfig Options

The supportconfig utility is usually called without any options. Display a list of all options with supportconfig -h or refer to the man page. The following list gives a brief overview of some common use cases:

Reducing the Size of the Information Being Gathered

Use the minimal option (-m):

supportconfig -m
Limiting the Information to a Specific Topic

If you have already localized a problem with the default supportconfig output and have found that it relates to a specific area or feature set only, you should limit the collected information to the specific area for the next supportconfig run. For example, if you detected problems with LVM and want to test a recent change that you did to the LVM configuration, it makes sense to gather the minimum supportconfig information around LVM only:

supportconfig -i LVM

For a complete list of feature keywords that you can use for limiting the collected information to a specific area, run

supportconfig -F
Including Additional Contact Information in the Output
supportconfig -E tux@example.org -N "Tux Penguin" -O "Penguin Inc." ...

(all in one line)

Collecting Already Rotated Log Files
supportconfig -l

This is especially useful in high logging environments or after a kernel crash when syslog rotates the log files after a reboot.
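These options can usually be combined. As a purely hypothetical example, the following call gathers minimal information limited to the LVM area and adds contact data:

supportconfig -m -i LVM -E tux@example.org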

33.3 Submitting Information to Global Technical Support

Use the YaST Support module or the supportconfig command line utility to submit system information to Global Technical Support. When you experience a server issue and want assistance from support, you will need to open a service request first. For details, see Section 33.2.1, “Creating a Service Request Number”.

The following examples use 12345678901 as a placeholder for your service request number. Replace 12345678901 with the service request number you created in Section 33.2.1, “Creating a Service Request Number”.

Procedure 33.1: Submitting Information to Support with YaST

The following procedure assumes that you have already created a supportconfig archive, but have not uploaded it yet. Make sure to have included your contact information in the archive as described in Section 33.2.3, “Creating a Supportconfig Archive with YaST”, Step 4. For instructions on how to generate and submit a supportconfig archive in one go, see Section 33.2.3, “Creating a Supportconfig Archive with YaST”.

  1. Start YaST and open the Support module.

  2. Click Upload.

  3. In Package with log files specify the path to the existing supportconfig archive or Browse for it.

  4. YaST automatically proposes an upload server. If you want to modify it, refer to Section 33.2.2, “Upload Targets” for details of which upload servers are available.

    Proceed with Next.

  5. Click Finish.

Procedure 33.2: Submitting Information to Support from Command Line

The following procedure assumes that you have already created a supportconfig archive, but have not uploaded it yet. For instructions on how to generate and submit a supportconfig archive in one go, see Section 33.2.3, “Creating a Supportconfig Archive with YaST”.

  1. Servers with Internet connectivity:

    1. To use the default upload target, run:

      supportconfig -ur 12345678901
    2. For the secure upload target, use the following:

      supportconfig -ar 12345678901
  2. Servers without Internet connectivity

    1. Run the following:

      supportconfig -r 12345678901
    2. Manually upload the /var/log/nts_SR12345678901*tbz archive to one of our FTP servers. Which one to use depends on your location in the world. For an overview, see Section 33.2.2, “Upload Targets”.

  3. After the TAR archive arrives in the incoming directory of our FTP server, it is automatically attached to your service request.

33.4 Analyzing System Information

System reports created with supportconfig can be analyzed for known issues to help resolve problems faster. For this purpose, SUSE Linux Enterprise Desktop provides both an appliance and a command line tool for Supportconfig Analysis (SCA). The SCA appliance is a server-side tool which is non-interactive. The SCA tool (scatool) runs on the client-side and is executed from command line. Both tools analyze supportconfig archives from affected servers. The initial server analysis takes place on the SCA appliance or the workstation on which scatool is running. No analysis cycles happen on the production server.

Both the appliance and the command line tool additionally need product-specific patterns that enable them to analyze the supportconfig output for the associated products. Each pattern is a script that parses and evaluates a supportconfig archive for one known issue. The patterns are available as RPM packages.

For example, if you want to analyze supportconfig archives that have been generated on a SUSE Linux Enterprise 11 machine, you need to install the package sca-patterns-sle11 together with the SCA tool (or on the machine that you want to use as the SCA appliance server). To analyze supportconfig archives generated on a SUSE Linux Enterprise 10 machine, you need the package sca-patterns-sle10.

You can also develop your own patterns as briefly described in Section 33.4.3, “Developing Custom Analysis Patterns”.

33.4.1 SCA Command Line Tool

The SCA command line tool lets you analyze a local machine using both supportconfig and the analysis patterns for the specific product that is installed on the local machine. The tool creates an HTML report showing its analysis results. For an example, see Figure 33.1, “HTML Report Generated by SCA Tool”.

Figure 33.1: HTML Report Generated by SCA Tool

The scatool command is provided by the sca-server-report package. It is not installed by default. Additionally, you need the sca-patterns-base package and any of the product-specific sca-patterns-* packages that match the product installed on the machine where you want to run the scatool command.

Execute the scatool command either as root user or with sudo. When calling the SCA tool, you can either analyze an existing supportconfig TAR archive or you can let it generate and analyze a new archive in one go. The tool also provides an interactive console (with tab completion) and the possibility to run supportconfig on an external machine and to execute the subsequent analysis on the local machine.

Find some example commands below:

sudo scatool -s

Calls supportconfig and generates a new supportconfig archive on the local machine. Analyzes the archive for known issues by applying the SCA analysis patterns that match the installed product. Displays the path to the HTML report that is generated from the results of the analysis. It is usually written to the same directory where the supportconfig archive can be found.

sudo scatool -s -o /opt/sca/reports/ 

Same as sudo scatool -s, only that the HTML report is written to the path specified with -o.

sudo scatool -a PATH_TO_TARBALL_OR_DIR 

Analyzes the specified supportconfig archive file (or the specified directory to where the supportconfig archive has been extracted). The generated HTML report is saved in the same location as the supportconfig archive or directory.

sudo scatool -a SLES_SERVER.COMPANY.COM 

Establishes an SSH connection to an external server SLES_SERVER.COMPANY.COM and runs supportconfig on the server. The supportconfig archive is then copied back to the local machine and is analyzed there. The generated HTML report is saved to the default /var/log directory. (Only the supportconfig archive is created on SLES_SERVER.COMPANY.COM).

sudo scatool -c

Starts the interactive console for scatool. Press Tab twice to see the available commands.

For further options and information, run sudo scatool -h or see the scatool man page.

33.4.2 SCA Appliance

If you decide to use the SCA appliance for analyzing the supportconfig archives, you need to configure a dedicated server (or virtual machine) as the SCA appliance server. The SCA appliance server can then be used to analyze supportconfig archives from all machines in your enterprise running SUSE Linux Enterprise Server or SUSE Linux Enterprise Desktop. You can simply upload supportconfig archives to the appliance server for analysis. Interaction is not required. In a MariaDB database, the SCA appliance keeps track of all supportconfig archives that have been analyzed. You can read the SCA reports directly from the appliance Web interface. Alternatively, you can have the appliance send the HTML report to any administrative user via e-mail. For details, see Section 33.4.2.5.4, “Sending SCA Reports via E-Mail”.

33.4.2.1 Installation Quick Start

To install and set up the SCA appliance quickly from the command line, follow the instructions here. The procedure is intended for experts and focuses on the bare installation and setup commands. For more information, refer to the more detailed description in Section 33.4.2.2, “Prerequisites” to Section 33.4.2.3, “Installation and Basic Setup”.

Prerequisites
  • Web and LAMP Pattern

  • Web and Scripting Module (you must register the machine to be able to select this module).

Note
Note: root Privileges Required

All commands in the following procedure must be run as root.

Procedure 33.3: Installation Using Anonymous FTP for Upload

After the appliance is set up and running, no more manual interaction is required. This way of setting up the appliance is therefore ideal for using cron jobs to create and upload supportconfig archives.

  1. On the machine on which to install the appliance, log in to a console and execute the following commands:

    zypper install sca-appliance-* sca-patterns-* vsftpd
    systemctl enable apache2
    systemctl start apache2
    systemctl enable vsftpd
    systemctl start vsftpd
    yast ftp-server
  2. In YaST FTP Server, select Authentication › Enable Upload › Anonymous Can Upload › Finish › Yes to Create /srv/ftp/upload.

  3. Execute the following commands:

    systemctl enable mysql
    systemctl start mysql
    mysql_secure_installation
    setup-sca -f

    Running mysql_secure_installation creates a MariaDB root password.

Procedure 33.4: Installation Using SCP/tmp for Upload

This way of setting up the appliance requires manual interaction when typing the SSH password.

  1. On the machine on which to install the appliance, log in to a console.

  2. Execute the following commands:

    zypper install sca-appliance-* sca-patterns-*
    systemctl enable apache2
    systemctl start apache2
    systemctl enable mysql
    systemctl start mysql
    mysql_secure_installation
    setup-sca

33.4.2.2 Prerequisites

To run an SCA appliance server, you need the following prerequisites:

  • All sca-appliance-* packages.

  • The sca-patterns-base package. Additionally, any of the product-specific sca-patterns-* for the type of supportconfig archives that you want to analyze with the appliance.

  • Apache

  • PHP

  • MariaDB

  • anonymous FTP server (optional)

33.4.2.3 Installation and Basic Setup

As listed in Section 33.4.2.2, “Prerequisites”, the SCA appliance has several dependencies on other packages. Therefore, you need to make some preparations before installing and setting up the SCA appliance server:

  1. For Apache and MariaDB, install the Web and LAMP installation patterns.

  2. Set up Apache, MariaDB, and optionally an anonymous FTP server.

  3. Configure Apache and MariaDB to start at boot time:

    sudo systemctl enable apache2 mysql
  4. Start both services:

    sudo systemctl start apache2 mysql

Now you can install the SCA appliance and set it up as described in Procedure 33.5, “Installing and Configuring the SCA Appliance”.

Procedure 33.5: Installing and Configuring the SCA Appliance

After installing the packages, use the setup-sca script for the basic configuration of the MariaDB administration and report database that is used by the SCA appliance.

You can use it to configure the following options for uploading the supportconfig archives from your machines to the SCA appliance:

  • scp

  • anonymous FTP server

  1. Install the appliance and the SCA base-pattern library:

    sudo zypper install sca-appliance-* sca-patterns-base
  2. Additionally, install the pattern packages for the types of supportconfig archives you want to analyze. For example, if you have SUSE Linux Enterprise Server 11 and SUSE Linux Enterprise Server 12 servers in your environment, install both the sca-patterns-sle11 and sca-patterns-sle12 packages.

    To install all available patterns:

    zypper install sca-patterns-*
  3. For basic setup of the SCA appliance, use the setup-sca script. How to call it depends on how you want to upload the supportconfig archives to the SCA appliance server:

    • If you have configured an anonymous FTP server that uses the /srv/ftp/upload directory, execute the setup script with the -f option and follow the instructions on the screen:

      setup-sca -f
      Note
      Note: FTP Server Using Another Directory

      If your FTP server uses another directory than /srv/ftp/upload, adjust the following configuration files first to make them point to the correct directory: /etc/sca/sdagent.conf and /etc/sca/sdbroker.conf.

    • If you want to upload supportconfig files to the /tmp directory of the SCA appliance server via scp, call the setup script without any parameters and follow the instructions on the screen:

      setup-sca

    The setup script runs a few checks regarding its requirements and configures the needed components. It will prompt you for two passwords: the root password of the MariaDB installation that you have set up, and a Web user password with which to log in to the Web interface of the SCA appliance.

  4. Enter the existing MariaDB root password. It will allow the SCA appliance to connect to the MariaDB.

  5. Define a password for the Web user. It will be written to /srv/www/htdocs/sca/web-config.php and will be set as the password for the user scdiag. Both user name and password can be changed at any time later, see Section 33.4.2.5.1, “Password for the Web Interface”.

After successful installation and setup, the SCA appliance is ready for use, see Section 33.4.2.4, “Using the SCA Appliance”. However, you should modify some options such as changing the password for the Web interface, changing the source for the SCA pattern updates, enabling archiving mode or configuring e-mail notifications. For details on that, see Section 33.4.2.5, “Customizing the SCA Appliance”.

Warning
Warning: Data Protection

As the reports on the SCA appliance server contain security-relevant information of the machines whose supportconfig archives have been analyzed, make sure to protect the data on the SCA appliance server against unauthorized access.

33.4.2.4 Using the SCA Appliance

You can upload existing supportconfig archives to the SCA appliance manually or create new supportconfig archives and upload them to the SCA appliance in one step. Uploading can be done via FTP or SCP. For both, you need to know the URL where the SCA appliance can be reached. For upload via FTP, an FTP server needs to be configured for the SCA appliance, see Procedure 33.5, “Installing and Configuring the SCA Appliance”.

33.4.2.4.1 Uploading Supportconfig Archives to the SCA Appliance
  • For creating a supportconfig archive and uploading it via (anonymous) FTP:

    sudo supportconfig -U "ftp://SCA-APPLIANCE.COMPANY.COM/upload"
  • For creating a supportconfig archive and uploading it via SCP:

    sudo supportconfig -U "scp://SCA-APPLIANCE.COMPANY.COM/tmp"

    You will be prompted for the root user password of the server running the SCA appliance.

  • If you want to manually upload one or multiple archives, copy the existing archive files (usually located at /var/log/nts_*.tbz) to the SCA appliance. As target, use either the appliance server's /tmp directory or the /srv/ftp/upload directory (if FTP is configured for the SCA appliance server). An example is shown below.
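For example, to copy an existing archive to the appliance server's /tmp directory via scp (host name as in the examples above; you are prompted for the root password of the appliance server):

scp /var/log/nts_*.tbz root@sca-appliance.company.com:/tmp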

33.4.2.4.2 Viewing SCA Reports

SCA reports can be viewed from any machine that has a browser installed and can access the report index page of the SCA appliance.

  1. Start a Web browser and make sure that JavaScript and cookies are enabled.

  2. As a URL, enter the report index page of the SCA appliance.

    https://sca-appliance.company.com/sca

    If in doubt, ask your system administrator.

  3. You will be prompted for a user name and a password to log in.

    Figure 33.2: HTML Report Generated by SCA Appliance
  4. After logging in, click the date of the report you want to read.

  5. Click the Basic Health category first to expand it.

  6. In the Message column, click an individual entry. This opens the corresponding article in the SUSE Knowledgebase. Read the proposed solution and follow the instructions.

  7. If the Solutions column of the Supportconfig Analysis Report shows any additional entries, click them. Read the proposed solution and follow the instructions.

  8. Check the SUSE Knowledgebase (http://www.suse.com/support/kb/) for results that directly relate to the problem identified by SCA. Work at resolving them.

  9. Check for results that can be addressed proactively to avoid future problems.

33.4.2.5 Customizing the SCA Appliance

The following sections show how to change the password for the Web interface, how to change the source for the SCA pattern updates, how to enable archiving mode, and how to configure e-mail notifications.

33.4.2.5.1 Password for the Web Interface

The SCA Appliance Web interface requires a user name and password for logging in. The default user name is scdiag and the default password is linux (if not specified otherwise, see Procedure 33.5, “Installing and Configuring the SCA Appliance”). Change the default password to a secure password at the earliest possibility. You can also modify the user name.

Procedure 33.6: Changing User Name or Password for the Web Interface
  1. Log in as root user at the system console of the SCA appliance server.

  2. Open /srv/www/htdocs/sca/web-config.php in an editor.

  3. Change the values of $username and $password as desired.

  4. Save the file and exit.

33.4.2.5.2 Updates of SCA Patterns

By default, all sca-patterns-* packages are updated regularly by a root cron job that executes the sdagent-patterns script nightly, which in turn runs zypper update sca-patterns-*. A regular system update will update all SCA appliance and pattern packages. To update the SCA appliance and patterns manually, run:

sudo zypper update sca-*

The updates are installed from the SUSE Linux Enterprise 12 SP3 update repository by default. You can change the source for the updates to an SMT server, if desired. When sdagent-patterns runs zypper update sca-patterns-*, it gets the updates from the currently configured update channel. If that channel is located on an SMT server, the packages will be pulled from there.

Procedure 33.7: Disabling Automatic Updates of SCA Patterns
  1. Log in as root user at the system console of the SCA appliance server.

  2. Open /etc/sca/sdagent-patterns.conf in an editor.

  3. Change the entry

    UPDATE_FROM_PATTERN_REPO=1

    to

    UPDATE_FROM_PATTERN_REPO=0
  4. Save the file and exit. The machine does not require any restart to apply the change.
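Alternatively, the edit described above can be scripted; this one-liner is only a sketch and should be reviewed before running:

sudo sed -i 's/^UPDATE_FROM_PATTERN_REPO=1/UPDATE_FROM_PATTERN_REPO=0/' /etc/sca/sdagent-patterns.conf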

33.4.2.5.3 Archiving Mode

All supportconfig archives are deleted from the SCA appliance after they have been analyzed and their results have been stored in the MariaDB database. However, for troubleshooting purposes it can be useful to keep copies of supportconfig archives from a machine. By default, archiving mode is disabled.

Procedure 33.8: Enabling Archiving Mode in the SCA Appliance
  1. Log in as root user at the system console of the SCA appliance server.

  2. Open /etc/sca/sdagent.conf in an editor.

  3. Change the entry

    ARCHIVE_MODE=0

    to

    ARCHIVE_MODE=1
  4. Save the file and exit. The machine does not require any restart to apply the change.

After you have enabled archiving mode, the SCA appliance saves the supportconfig files to the /var/log/archives/saved directory instead of deleting them.

33.4.2.5.4 Sending SCA Reports via E-Mail

The SCA appliance can e-mail a report HTML file for each supportconfig archive analyzed. This feature is disabled by default. When enabling it, you can define a list of e-mail addresses to which the reports should be sent, and define a level of status messages that triggers the sending of reports (STATUS_NOTIFY_LEVEL).

Possible Values for STATUS_NOTIFY_LEVEL
$STATUS_OFF

Deactivate sending of HTML reports.

$STATUS_CRITICAL

Send only SCA reports that include a CRITICAL.

$STATUS_WARNING

Send only SCA reports that include a WARNING or CRITICAL.

$STATUS_RECOMMEND

Send only SCA reports that include a RECOMMEND, WARNING or CRITICAL.

$STATUS_SUCCESS

Send SCA reports that include a SUCCESS, RECOMMEND, WARNING or CRITICAL.

Procedure 33.9: Configuring E-Mail Notifications for SCA Reports
  1. Log in as root user at the system console of the SCA appliance server.

  2. Open /etc/sca/sdagent.conf in an editor.

  3. Search for the entry STATUS_NOTIFY_LEVEL. By default, it is set to $STATUS_OFF (e-mail notifications are disabled).

  4. To enable e-mail notifications, change $STATUS_OFF to the level of status messages that you want to have e-mail reports for, for example:

    STATUS_NOTIFY_LEVEL=$STATUS_SUCCESS

    For details, see Possible Values for STATUS_NOTIFY_LEVEL.

  5. To define the list of recipients to which the reports should be sent:

    1. Search for the entry EMAIL_REPORT='root'.

    2. Replace root with a list of e-mail addresses to which SCA reports should be sent. The e-mail addresses must be separated by spaces. For example:

      EMAIL_REPORT='tux@my.company.com wilber@your.company.com'
  6. Save the file and exit. The machine does not require any restart to apply the changes. All future SCA reports will be e-mailed to the specified addresses.

33.4.2.6 Backing Up and Restoring the Database

To back up and restore the MariaDB database that stores the SCA reports, use the scadb command as described below.

Procedure 33.10: Backing Up the Database
  1. Log in as root user at the system console of the server running the SCA appliance.

  2. Put the appliance into maintenance mode by executing:

    scadb maint
  3. Start the backup with:

    scadb backup

    The data is saved to a TAR archive: sca-backup-*sql.gz.

  4. If you are using the pattern creation database to develop your own patterns (see Section 33.4.3, “Developing Custom Analysis Patterns”), back up this data, too:

    sdpdb backup

    The data is saved to a TAR archive: sdp-backup-*sql.gz.

  5. Copy the following data to another machine or an external storage medium:

    • sca-backup-*sql.gz

    • sdp-backup-*sql.gz

    • /usr/lib/sca/patterns/local (only needed if you have created custom patterns)

  6. Reactivate the SCA appliance with:

    scadb reset agents
Procedure 33.11: Restoring the Database

To restore the database from your backup, proceed as follows:

  1. Log in as root user at the system console of the server running the SCA appliance.

  2. Copy the newest sca-backup-*sql.gz and sdp-backup-*sql.gz TAR archives to the SCA appliance server.

  3. To decompress the files, run:

    gzip -d *-backup-*sql.gz
  4. To import the data into the database, execute:

    scadb import sca-backup-*sql
  5. If you are using the pattern creation database to create your own patterns, also import the following data with:

    sdpdb import sdp-backup-*sql
  6. If you are using custom patterns, also restore /usr/lib/sca/patterns/local from your backup data.

  7. Reactivate the SCA appliance with:

    scadb reset agents
  8. Update the pattern modules in the database with:

    sdagent-patterns -u

33.4.3 Developing Custom Analysis Patterns

The SCA appliance comes with a complete pattern development environment (the SCA Pattern Database) that enables you to develop your own, custom patterns. Patterns can be written in any programming language. To make them available for the supportconfig analysis process, they need to be saved to /usr/lib/sca/patterns/local and to be made executable. Both the SCA appliance and the SCA tool will then run the custom patterns against new supportconfig archives as part of the analysis report. For detailed instructions on how to create (and test) your own patterns, see http://www.suse.com/communities/conversations/sca-pattern-development/.

33.5 Gathering Information during the Installation

During the installation, supportconfig is not available. However, you can collect log files from YaST by using save_y2logs. This command will create a .tar.xz archive in the directory /tmp.
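For example (the target file name is only an illustration):

save_y2logs /tmp/y2logs.tar.xz    # collect the YaST log files into a single archive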

If issues appear very early during installation, you may be able to gather information from the log file created by linuxrc. linuxrc is a small command that runs before YaST starts. This log file is available at /var/log/linuxrc.log.

Important
Important: Installation Log Files Not Available in the Installed System

The log files available during the installation are not available in the installed system anymore. Save the installation log files while the installer is still running.

33.6 Support of Kernel Modules

An important requirement for every enterprise operating system is the level of support you receive for your environment. Kernel modules are the most relevant connector between hardware (controllers) and the operating system. Every kernel module in SUSE Linux Enterprise has a supported flag that can take three possible values:

  • yes, thus supported

  • external, thus supported

  • (empty, not set), thus unsupported

The following rules apply:

  • All modules of a self-recompiled kernel are by default marked as unsupported.

  • Kernel modules supported by SUSE partners and delivered using the SUSE SolidDriver Program are marked external.

  • If the supported flag is not set, loading this module will taint the kernel. Tainted kernels are not supported. Unsupported kernel modules are included in an extra RPM package (kernel-FLAVOR-extra) that is only available for SUSE Linux Enterprise Desktop and the SUSE Linux Enterprise Workstation Extension. These modules will not be loaded by default (FLAVOR=default|xen|...). In addition, these unsupported modules are not available in the installer, and the kernel-FLAVOR-extra package is not part of the SUSE Linux Enterprise media.

  • Kernel modules not provided under a license compatible to the license of the Linux kernel will also taint the kernel. For details, see /usr/src/linux/Documentation/sysctl/kernel.txt and the state of /proc/sys/kernel/tainted.

33.6.1 Technical Background

  • Linux kernel: The value of /proc/sys/kernel/unsupported defaults to 2 on SUSE Linux Enterprise 12 SP3 (do not warn in syslog when loading unsupported modules). This default is used in the installer and in the installed system. See /usr/src/linux/Documentation/sysctl/kernel.txt for more information.

  • modprobe: The modprobe utility for checking module dependencies and loading modules appropriately checks for the value of the supported flag. If the value is yes or external, the module will be loaded, otherwise it will not. For information on how to override this behavior, see Section 33.6.2, “Working with Unsupported Modules”.

    Note
    Note: Support

    SUSE does not generally support the removal of storage modules via modprobe -r.
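To check the flag of a specific module, query the module information. For example (e1000e is only an example module name):

modinfo -F supported e1000e    # prints yes, external, or nothing (unsupported)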

33.6.2 Working with Unsupported Modules

While general supportability is important, situations can occur where loading an unsupported module is required (for example, for testing or debugging purposes, or if your hardware vendor provides a hotfix).

  • To override the default, edit /etc/modprobe.d/10-unsupported-modules.conf and change the value of the variable allow_unsupported_modules to 1. If an unsupported module is needed in the initrd, do not forget to run dracut -f to update the initrd.

    If you only want to try loading a module once, you can use the --allow-unsupported-modules option with modprobe. For more information, see the modprobe man page.

  • During installation, unsupported modules may be added through driver update disks, and they will be loaded. To enforce loading of unsupported modules during boot and afterward, use the kernel command line option oem-modules. While installing and initializing the suse-module-tools package, the kernel flag TAINT_NO_SUPPORT (/proc/sys/kernel/tainted) will be evaluated. If the kernel is already tainted, allow_unsupported_modules will be enabled. This will prevent unsupported modules from failing in the system being installed. If no unsupported modules are present during installation and the other special kernel command line option (oem-modules=1) is not used, the default still is to disallow unsupported modules.

Remember that loading and running unsupported modules will make the kernel and the whole system unsupported by SUSE.
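The following sketch shows the one-time override mentioned above (MODULE_NAME is a placeholder):

sudo modprobe --allow-unsupported-modules MODULE_NAME  # load an unsupported module once
sudo dracut -f                                         # rebuild the initrd if the module is needed there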

33.7 For More Information

34 Common Problems and Their Solutions


This chapter describes a range of potential problems and their solutions. Even if your situation is not precisely listed here, there may be one similar enough to offer hints to the solution of your problem.

34.1 Finding and Gathering Information

Linux reports things in a very detailed way. There are several places to look when you encounter problems with your system, most of which are standard to Linux systems in general, and some are relevant to SUSE Linux Enterprise Desktop systems. Most log files can be viewed with YaST (Miscellaneous › Start-Up Log).

YaST offers the possibility to collect all system information needed by the support team. Use Other › Support and select the problem category. When all information is gathered, attach it to your support request.

A list of the most frequently checked log files follows, with descriptions of their typical purpose. Paths containing ~ refer to the current user's home directory.

Table 34.1: Log Files

  ~/.xsession-errors: Messages from the desktop applications currently running.

  /var/log/apparmor/: Log files from AppArmor, see Part IV, “Confining Privileges with AppArmor” for detailed information.

  /var/log/audit/audit.log: Log file from Audit to track any access to files, directories, or resources of your system, and trace system calls. See Part V, “The Linux Audit Framework” for detailed information.

  /var/log/mail.*: Messages from the mail system.

  /var/log/NetworkManager: Log file from NetworkManager to collect problems with network connectivity.

  /var/log/samba/: Directory containing Samba server and client log messages.

  /var/log/warn: All messages from the kernel and system log daemon with the warning level or higher.

  /var/log/wtmp: Binary file containing user login records for the current machine session. View it with last.

  /var/log/Xorg.*.log: Various start-up and runtime log files from the X Window System. Useful for debugging failed X start-ups.

  /var/log/YaST2/: Directory containing YaST's actions and their results.

  /var/log/zypper.log: Log file of Zypper.

Apart from log files, your machine also supplies you with information about the running system. See Table 34.2, “System Information With the /proc File System”.

Table 34.2: System Information With the /proc File System

/proc/cpuinfo: Contains processor information, including its type, make, model, and performance.

/proc/dma: Shows which DMA channels are currently being used.

/proc/interrupts: Shows which interrupts are in use, and how many of each have been in use.

/proc/iomem: Displays the status of I/O (input/output) memory.

/proc/ioports: Shows which I/O ports are in use at the moment.

/proc/meminfo: Displays memory status.

/proc/modules: Displays the currently loaded kernel modules.

/proc/mounts: Displays the currently mounted file systems.

/proc/partitions: Shows the partitioning of all hard disks.

/proc/version: Displays the current version of Linux.

Apart from the /proc file system, the Linux kernel exports information with the sysfs module, an in-memory file system. This module represents kernel objects, their attributes and relationships. For more information about sysfs, see the context of udev in Chapter 22, Dynamic Kernel Device Management with udev. Table 34.3 contains an overview of the most common directories under /sys.

Table 34.3: System Information With the /sys File System

/sys/block: Contains subdirectories for each block device discovered in the system; these are mostly disk-type devices.

/sys/bus: Contains subdirectories for each physical bus type.

/sys/class: Contains subdirectories grouped by functional device type (such as graphics, net, or printer).

/sys/devices: Contains the global device hierarchy.
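
A few quick reads from these virtual file systems often narrow a problem down. The following commands are a minimal example session; the interface name eth0 is an assumption and may differ on your system:

    grep "model name" /proc/cpuinfo    # CPU type of each core
    head -n 3 /proc/meminfo            # total and free memory
    ls /sys/block                      # block devices known to the kernel
    cat /sys/class/net/eth0/address    # MAC address of the eth0 interface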

Linux comes with several tools for system analysis and monitoring. See Chapter 2, System Monitoring Utilities for a selection of the most important ones used in system diagnostics.

Each of the following scenarios begins with a header describing the problem followed by a paragraph or two offering suggested solutions, available references for more detailed solutions, and cross-references to other scenarios that are related.

34.2 Installation Problems

Installation problems are situations when a machine fails to install. It may fail entirely or it may not be able to start the graphical installer. This section highlights some typical problems you may run into, and offers possible solutions or workarounds for these kinds of situations.

34.2.1 Checking Media

If you encounter any problems using the SUSE Linux Enterprise Desktop installation media, check the integrity of your installation media. Boot from the media and choose Check Installation Media from the boot menu. In a running system, start YaST and choose Software › Media Check. To check the SUSE Linux Enterprise Desktop medium, insert it into the drive and click Start Check in the Media Check screen of YaST. This may take several minutes. If errors are detected, do not use this medium for installation. Media problems are most likely to occur if you burned the medium yourself; burning the media at a low speed (4x) helps to avoid problems.

Figure 34.1: Checking Media
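
If you prefer the command line, or the graphical check is not available, you can instead verify a downloaded ISO image against the checksum published alongside it on the download page. The file name below is a placeholder:

    sha256sum SLED-12-SP3-DVD-x86_64-GM-DVD1.iso
    # Compare the output with the checksum value from the download page.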

34.2.2 No Bootable DVD Drive Available

If your computer does not contain a bootable DVD-ROM drive, or if the one you have is not supported by Linux, there are several options for installing your machine without a built-in DVD drive:

Using an External Boot Device

If it is supported by your BIOS and the installation kernel, boot from external DVD drives or USB storage devices. Refer to Section 3.2.1, “PC (AMD64/Intel 64/ARM AArch64): System Start-up” for instructions on how to create a bootable USB storage device.

Network Boot via PXE

If a machine lacks a DVD drive, but provides a working Ethernet connection, perform a completely network-based installation. See Section 7.1.3, “Remote Installation via VNC—PXE Boot and Wake on LAN” and Section 7.1.6, “Remote Installation via SSH—PXE Boot and Wake on LAN” for details.

34.2.2.1 External Boot Devices

Linux supports most existing DVD drives. If the system has no DVD drive, it is still possible that an external DVD drive, connected through USB, FireWire, or SCSI, can be used to boot the system. This depends mainly on the interaction of the BIOS and the hardware used. Sometimes a BIOS update may help if you encounter problems.

When installing from a Live CD, you can also create a Live flash disk to boot from.

34.2.3 Booting from Installation Media Fails

One reason a machine does not boot the installation media can be an incorrect boot sequence setting in the BIOS. The BIOS boot sequence must have the DVD drive set as the first entry for booting. Otherwise, the machine tries to boot from another medium, typically the hard disk. Guidance for changing the BIOS boot sequence can be found in the documentation provided with your mainboard, or in the following paragraphs.

The BIOS is the software that enables the very basic functions of a computer. Motherboard vendors provide a BIOS specifically made for their hardware. Normally, the BIOS setup can only be accessed at a specific time—when the machine is booting. During this initialization phase, the machine performs several diagnostic hardware tests. One of them is a memory check, indicated by a memory counter. When the counter appears, look for a line, usually below the counter or somewhere at the bottom, mentioning the key to press to access the BIOS setup. Usually the key to press is one of Del, F1, or Esc. Press this key until the BIOS setup screen appears.

Procedure 34.1: Changing the BIOS Boot Sequence
  1. Enter the BIOS using the proper key as announced by the boot routines and wait for the BIOS screen to appear.

  2. To change the boot sequence in an AWARD BIOS, look for the BIOS FEATURES SETUP entry. Other manufacturers may have a different name for this, such as ADVANCED CMOS SETUP. When you have found the entry, select it and confirm with Enter.

  3. In the screen that opens, look for a subentry called BOOT SEQUENCE or BOOT ORDER. Change the settings by pressing Page ↑ or Page ↓ until the DVD drive is listed first.

  4. Leave the BIOS setup screen by pressing Esc. To save the changes, select SAVE & EXIT SETUP, or press F10. To confirm that your settings should be saved, press Y.

Procedure 34.2: Changing the Boot Sequence in an SCSI BIOS (Adaptec Host Adapter)
  1. Open the setup by pressing Ctrl+A.

  2. Select Disk Utilities. The connected hardware components are now displayed.

    Make note of the SCSI ID of your DVD drive.

  3. Exit the menu with Esc.

  4. Open Configure Adapter Settings. Under Additional Options, select Boot Device Options and press Enter.

  5. Enter the ID of the DVD drive and press Enter again.

  6. Press Esc twice to return to the start screen of the SCSI BIOS.

  7. Exit this screen and confirm with Yes to boot the computer.

Regardless of what language and keyboard layout your final installation will be using, most BIOS configurations use the US keyboard layout as shown in the following figure:

Figure 34.2: US Keyboard Layout

34.2.4 Fails to Boot

Some hardware types, mainly very old or very recent ones, fail to install. Often this happens because support for this type of hardware is missing in the installation kernel, or because certain functionality included in this kernel, such as ACPI, can still cause problems on some hardware.

If your system fails to install using the standard Installation mode from the first installation boot screen, try the following:

  1. With the DVD still in the drive, reboot the machine with Ctrl+Alt+Del or using the hardware reset button.

  2. When the boot screen appears, press F5, use the arrow keys of your keyboard to navigate to No ACPI and press Enter to launch the boot and installation process. This option disables the support for ACPI power management techniques.

  3. Proceed with the installation as described in Chapter 3, Installation with YaST.

If this fails, proceed as above, but choose Safe Settings instead. This option disables ACPI and DMA support. Most hardware will boot with this option.

If both of these options fail, use the boot options prompt to pass any additional parameters needed to support this type of hardware to the installation kernel. For more information about the parameters available as boot options, refer to the kernel documentation located in /usr/src/linux/Documentation/kernel-parameters.txt.

Tip
Tip: Obtaining Kernel Documentation

Install the kernel-source package to view the kernel documentation.

There are other ACPI-related kernel parameters that can be entered at the boot prompt prior to booting for installation (an example combination follows the list):

acpi=off

This parameter disables the complete ACPI subsystem on your computer. This may be useful if your computer cannot handle ACPI or if you think ACPI in your computer causes trouble.

acpi=force

Always enable ACPI even if your computer has an old BIOS dated before the year 2000. This parameter also enables ACPI if it is set in addition to acpi=off.

acpi=noirq

Do not use ACPI for IRQ routing.

acpi=ht

Run only enough ACPI to enable hyper-threading.

acpi=strict

Be less tolerant of platforms that are not strictly ACPI specification compliant.

pci=noacpi

Disable PCI IRQ routing of the new ACPI system.

pnpacpi=off

This option helps with serial or parallel port problems when your BIOS setup contains wrong interrupts or ports.

notsc

Disable the time stamp counter. This option can be used to work around timing problems on your systems. It is a recent feature; if you see regressions on your machine, especially time-related issues or even total hangs, this option is worth a try.

nohz=off

Disable the nohz feature. If your machine hangs, this option may help. Otherwise it is of no use.
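
As an illustration, the following line entered at the boot options prompt disables ACPI completely and switches off the time stamp counter. Which combination is appropriate depends on your hardware:

    acpi=off notsc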

Once you have determined the right parameter combination, YaST automatically writes them to the boot loader configuration to make sure that the system boots properly next time.

If unexplainable errors occur when the kernel is loaded or during the installation, select Memory Test in the boot menu to check the memory. If Memory Test returns an error, it is usually a hardware error.

34.2.5 Fails to Launch Graphical Installer

After you insert the medium into your drive and reboot your machine, the installation screen comes up, but after you select Installation, the graphical installer does not start.

There are several ways to deal with this situation:

  • Try to select another screen resolution for the installation dialogs.

  • Select Text Mode for installation.

  • Do a remote installation via VNC using the graphical installer.

Procedure 34.3: Change Screen Resolution for Installation
  1. Boot for installation.

  2. Press F3 to open a menu from which to select a lower resolution for installation purposes.

  3. Select Installation and proceed with the installation as described in Chapter 3, Installation with YaST.

Procedure 34.4: Installation in Text Mode
  1. Boot for installation.

  2. Press F3 and select Text Mode.

  3. Select Installation and proceed with the installation as described in Chapter 3, Installation with YaST.

Procedure 34.5: VNC Installation
  1. Boot for installation.

  2. Enter the following text at the boot options prompt:

    vnc=1 vncpassword=SOME_PASSWORD

    Replace SOME_PASSWORD with the password to use for VNC installation.

  3. Select Installation then press Enter to start the installation.

    Instead of starting right into the graphical installation routine, the system continues to run in text mode, then halts, displaying a message containing the IP address and port number at which the installer can be reached via a browser interface or a VNC viewer application.

  4. If using a browser to access the installer, launch the browser and enter the address information provided by the installation routines on the future SUSE Linux Enterprise Desktop machine and press Enter:

    http://IP_ADDRESS_OF_MACHINE:5801

    A dialog opens in the browser window prompting you for the VNC password. Enter it and proceed with the installation as described in Chapter 3, Installation with YaST.

    Important
    Important: Cross-platform Support

    Installation via VNC works with any browser under any operating system, provided Java support is enabled.

    Provide the IP address and password to your VNC viewer when prompted. A window opens, displaying the installation dialogs. Proceed with the installation as usual.
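
    When using a VNC viewer application instead of a browser, the connection typically looks like the following, assuming the installer reported display :1 on the given address:

      vncviewer IP_ADDRESS_OF_MACHINE:1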

34.2.6 Only Minimalistic Boot Screen Started

You inserted the medium into the drive, the BIOS routines are finished, but the system does not start with the graphical boot screen. Instead it launches a very minimalistic text-based interface. This may happen on any machine not providing sufficient graphics memory for rendering a graphical boot screen.

Although the text boot screen looks minimalistic, it provides nearly the same functionality as the graphical one:

Boot Options

Unlike the graphical interface, the different boot options cannot be selected using the cursor keys of your keyboard. The boot menu of the text mode boot screen offers some keywords to enter at the boot prompt. These keywords map to the options offered in the graphical version. Enter your choice and press Enter to launch the boot process.

Custom Boot Options

After selecting a boot option, enter the appropriate keyword at the boot prompt or enter some custom boot options as described in Section 34.2.4, “Fails to Boot”. To launch the installation process, press Enter.

Screen Resolutions

Use the function keys (F1 ... F12) to determine the screen resolution for installation. If you need to boot in text mode, choose F3.

34.3 Boot Problems

Boot problems are situations when your system does not boot properly (does not boot to the expected target and login screen).

34.3.1 The GRUB 2 Boot Loader Fails to Load

If the hardware is functioning properly, it is possible that the boot loader is corrupted and Linux cannot start on the machine. In this case, it is necessary to repair the boot loader. To do so, you need to start the Rescue System as described in Section 34.6.2, “Using the Rescue System” and follow the instructions in Section 34.6.2.4, “Modifying and Re-installing the Boot Loader”.

Alternatively, you can use the Rescue System to fix the boot loader as follows. Boot your machine from the installation media. In the boot screen, choose More › Boot Linux System. Select the disk containing the installed system and kernel with the default kernel options.

When the system is booted, start YaST and switch to System › Boot Loader. Make sure that the Write Generic Boot Code to MBR option is enabled, and press OK. This fixes the corrupted boot loader by overwriting it, or installs the boot loader if it is missing.

Other reasons for the machine not booting may be BIOS-related:

BIOS Settings

Check your BIOS for references to your hard disk. GRUB 2 may simply not be started if the hard disk itself cannot be found with the current BIOS settings.

BIOS Boot Order

Check whether your system's boot order includes the hard disk. If the hard disk option was not enabled, your system may install properly, but fail to boot when access to the hard disk is required.

34.3.2 No Login or Prompt Appears

This behavior typically occurs after a failed kernel upgrade; it is known as a kernel panic because of the type of error that can sometimes be seen on the system console at the final stage of the boot process. If, in fact, the machine has just been rebooted following a software update, the immediate goal is to reboot it using the old, proven version of the Linux kernel and associated files. This can be done in the GRUB 2 boot loader screen during the boot process as follows:

  1. Reboot the computer using the reset button, or switch it off and on again.

  2. When the GRUB 2 boot screen becomes visible, select the Advanced Options entry and choose the previous kernel from the menu. The machine will boot using the prior version of the kernel and its associated files.

  3. After the boot process has completed, remove the newly installed kernel and, if necessary, set the default boot entry to the old kernel using the YaST Boot Loader module. For more information refer to Section 13.3, “Configuring the Boot Loader with YaST”. However, doing this is probably not necessary because automated update tools normally modify it for you during the rollback process.

  4. Reboot.

If this does not fix the problem, boot the computer using the installation media. After the machine has booted, continue with Step 3.

34.3.3 No Graphical Login

If the machine starts, but does not boot into the graphical login manager, anticipate problems either with the choice of the default systemd target or the configuration of the X Window System. To check the current systemd default target run the command sudo systemctl get-default. If the value returned is not graphical.target, run the command sudo systemctl isolate graphical.target. If the graphical login screen starts, log in and start YaST › System › Services Manager and set the Default System Target to Graphical Interface. From now on the system should boot into the graphical login screen.
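
To make the graphical target the default without YaST, the following minimal session performs the same check and change on a systemd-based system:

    sudo systemctl get-default                    # prints, for example, multi-user.target
    sudo systemctl isolate graphical.target       # switch to the graphical target now
    sudo systemctl set-default graphical.target   # make the graphical target the default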

If the graphical login screen does not start even after booting or switching to the graphical target, your desktop or X Window software is probably misconfigured or corrupted. Examine the log files at /var/log/Xorg.*.log for detailed messages from the X server as it attempted to start. If the desktop fails during start, it may log error messages to the system journal that can be queried with the command journalctl (see Chapter 16, journalctl: Query the systemd Journal for more information). If these error messages hint at a configuration problem in the X server, try to fix these issues. If the graphical system still does not come up, consider reinstalling the graphical desktop.

34.3.4 Root Btrfs Partition Cannot Be Mounted

If a btrfs root partition becomes corrupted, try the following options (see the sketch after this list):

  • Mount the partition with the -o recovery option.

  • If that fails, run btrfs-zero-log on your root partition.
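
Run from the rescue system, the two options above might look like the following sketch; /dev/sda2 is an assumed root device, so substitute your own:

    mount -o recovery /dev/sda2 /mnt    # try mounting with recovery support
    btrfs-zero-log /dev/sda2            # last resort: clear the file system log tree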

34.3.5 Force Checking Root Partitions

If the root partition becomes corrupted, add the parameter forcefsck at the boot prompt. This passes the option -f (force) to the fsck command.
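
To pass the parameter, press E in the GRUB 2 boot menu and append it to the line starting with linux. A hypothetical result, with kernel version and root device as placeholders, could read:

    linux /boot/vmlinuz-VERSION root=/dev/sda2 forcefsck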

34.4 Login Problems

Login problems occur when your machine does boot to the expected welcome screen or login prompt, but refuses to accept the user name and password, or accepts them but then does not behave properly (fails to start the graphic desktop, produces errors, drops to a command line, etc.).

34.4.1 Valid User Name and Password Combinations Fail

This usually occurs when the system is configured to use network authentication or directory services and, for some reason, cannot retrieve results from its configured servers. The root user, as the only local user, is the only user that can still log in to these machines. The following are some common reasons a machine appears functional but cannot process logins correctly:

  • The network is not working. For further directions on this, turn to Section 34.5, “Network Problems”.

  • DNS is not working at the moment (which prevents GNOME from working and the system from making validated requests to secure servers). One indication that this is the case is that the machine takes an extremely long time to respond to any action. Find more information about this topic in Section 34.5, “Network Problems”.

  • If the system is configured to use Kerberos, the system's local time may have drifted past the accepted variance with the Kerberos server time (this is typically 300 seconds). If NTP (network time protocol) is not working properly or local NTP servers are not working, Kerberos authentication ceases to function because it depends on common clock synchronization across the network.

  • The system's authentication configuration is misconfigured. Check the PAM configuration files involved for any typographical errors or misordering of directives. For additional background information about PAM and the syntax of the configuration files involved, refer to Chapter 2, Authentication with PAM.

  • The home partition is encrypted. Find more information about this topic in Section 34.4.3, “Login to Encrypted Home Partition Fails”.

In all cases that do not involve external network problems, the solution is to reboot the system into single-user mode and repair the configuration before booting again into operating mode and attempting to log in again. To boot into single-user mode:

  1. Reboot the system. The boot screen appears, offering a prompt.

  2. Press Esc to exit the splash screen and get to the GRUB 2 text-based menu.

  3. Press E to enter the GRUB 2 editor.

  4. Add the following parameter to the line containing the kernel parameters:

    systemd.unit=rescue.target
  5. Press F10.

  6. Enter the user name and password for root.

  7. Make all the necessary changes.

  8. Boot into the full multiuser and network mode by entering systemctl isolate graphical.target at the command line.

34.4.2 Valid User Name and Password Not Accepted

This is by far the most common problem users encounter, because there are many reasons this can occur. Depending on whether you use local user management and authentication or network authentication, login failures occur for different reasons.

Local user management can fail for the following reasons:

  • The user may have entered the wrong password.

  • The user's home directory containing the desktop configuration files is corrupted or write protected.

  • There may be problems with the X Window System authenticating this particular user, especially if the user's home directory has been used with another Linux distribution prior to installing the current one.

To locate the reason for a local login failure, proceed as follows:

  1. Check whether the user remembered their password correctly before you start debugging the whole authentication mechanism. If the user may have forgotten their password, use the YaST User Management module to change it. Pay attention to the Caps Lock key and unlock it, if necessary.

  2. Log in as root and check the system journal with journalctl -e for error messages of the login process and of PAM.

  3. Try to log in from a console (using Ctrl+Alt+F1). If this is successful, the blame cannot be put on PAM, because it is possible to authenticate this user on this machine. Try to locate any problems with the X Window System or the GNOME desktop. For more information, refer to Section 34.4.4, “Login Successful but GNOME Desktop Fails”.

  4. If the user's home directory has been used with another Linux distribution, remove the Xauthority file in the user's home. Use a console login via Ctrl+Alt+F1 and run rm .Xauthority as this user. This should eliminate X authentication problems for this user. Try graphical login again.

  5. If the desktop could not start because of corrupt configuration files, proceed with Section 34.4.4, “Login Successful but GNOME Desktop Fails”.

In the following, common reasons a network authentication for a particular user may fail on a specific machine are listed:

  • The user may have entered the wrong password.

  • The user name exists in the machine's local authentication files and is also provided by a network authentication system, causing conflicts.

  • The home directory exists but is corrupt or unavailable. Perhaps it is write protected or is on a server that is inaccessible at the moment.

  • The user does not have permission to log in to that particular host in the authentication system.

  • The machine has changed host names, for whatever reason, and the user does not have permission to log in to that host.

  • The machine cannot reach the authentication server or directory server that contains that user's information.

  • There may be problems with the X Window System authenticating this particular user, especially if the user's home has been used with another Linux distribution prior to installing the current one.

To locate the cause of the login failures with network authentication, proceed as follows:

  1. Check whether the user remembered their password correctly before you start debugging the whole authentication mechanism.

  2. Determine the directory server which the machine relies on for authentication and make sure that it is up and running and properly communicating with the other machines.

  3. Determine that the user's user name and password work on other machines to make sure that their authentication data exists and is properly distributed.

  4. See if another user can log in to the misbehaving machine. If another user can log in without difficulty or if root can log in, log in and examine the system journal with journalctl -e. Locate the time stamps that correspond to the login attempts and determine if PAM has produced any error messages.

  5. Try to log in from a console (using Ctrl+Alt+F1). If this is successful, the problem is not with PAM or the directory server on which the user's home is hosted, because it is possible to authenticate this user on this machine. Try to locate any problems with the X Window System or the GNOME desktop. For more information, refer to Section 34.4.4, “Login Successful but GNOME Desktop Fails”.

  6. If the user's home directory has been used with another Linux distribution, remove the Xauthority file in the user's home. Use a console login via Ctrl+Alt+F1 and run rm .Xauthority as this user. This should eliminate X authentication problems for this user. Try graphical login again.

  7. If the desktop could not start because of corrupt configuration files, proceed with Section 34.4.4, “Login Successful but GNOME Desktop Fails”.

34.4.3 Login to Encrypted Home Partition Fails

It is recommended to use an encrypted home partition for laptops. If you cannot log in to your laptop, the reason is usually simple: your partition could not be unlocked.

During boot, you need to enter the passphrase to unlock your encrypted partition. If you do not enter it, the boot process continues, leaving the partition locked.

To unlock your encrypted partition, proceed as follows:

  1. Switch to the text console with Ctrl+Alt+F1.

  2. Become root.

  3. Restart the unlocking process again with:

    systemctl restart home.mount
  4. Enter your passphrase to unlock your encrypted partition.

  5. Exit the text console and switch back to the login screen with Alt+F7.

  6. Log in as usual.

34.4.4 Login Successful but GNOME Desktop Fails

If this is the case, it is likely that your GNOME configuration files have become corrupted. Some symptoms may include the keyboard failing to work, the screen geometry becoming distorted, or even the screen coming up as a bare gray field. The important distinction is that if another user logs in, the machine works normally. It is then likely that the problem can be fixed relatively quickly by simply moving the user's GNOME configuration directory to a new location, which causes GNOME to initialize a new one. Although the user is forced to reconfigure GNOME, no data is lost.

  1. Switch to a text console by pressing Ctrl+Alt+F1.

  2. Log in with your user name.

  3. Move the user's GNOME configuration directories to a temporary location:

    mv .gconf  .gconf-ORIG-RECOVER
    mv .gnome2 .gnome2-ORIG-RECOVER
  4. Log out.

  5. Log in again, but do not run any applications.

  6. Recover your individual application configuration data (including the Evolution e-mail client data) by copying the ~/.gconf-ORIG-RECOVER/apps/ directory back into the new ~/.gconf directory as follows:

    cp -a .gconf-ORIG-RECOVER/apps .gconf/

    If this causes login problems, attempt to recover only the critical application data and reconfigure the remainder of the applications.

34.5 Network Problems

Many problems of your system may be network-related, even though they do not seem to be at first. For example, the reason for a system not allowing users to log in may be a network problem of some kind. This section introduces a simple checklist you can apply to identify the cause of any network problem encountered.

Procedure 34.6: How to Identify Network Problems

When checking the network connection of your machine, proceed as follows:

  1. If you use an Ethernet connection, check the hardware first. Make sure that your network cable is properly plugged into your computer and router (or hub, etc.). The control lights next to your Ethernet connector should normally both be active.

    If the connection fails, check whether your network cable works with another machine. If it does, your network card is causing the failure. If hubs or switches are included in your network setup, they may be faulty as well.

  2. If using a wireless connection, check whether the wireless link can be established by other machines. If not, contact the wireless network's administrator.

  3. Once you have checked your basic network connectivity, try to find out which service is not responding. Gather the address information of all network servers needed in your setup. Either look them up in the appropriate YaST module or ask your system administrator. The following list gives some typical network servers involved in a setup together with the symptoms of an outage.

    DNS (Name Service)

    A broken or malfunctioning name service affects the network's functionality in many ways. If the local machine relies on any network servers for authentication and these servers cannot be found because of name resolution issues, users would not even be able to log in. Machines in the network managed by a broken name server would not be able to see each other and communicate.

    NTP (Time Service)

    A malfunctioning or completely broken NTP service could affect Kerberos authentication and X server functionality.

    NFS (File Service)

    If any application needs data stored in an NFS mounted directory, it cannot start or function properly if this service was down or misconfigured. In the worst case scenario, a user's personal desktop configuration would not come up if their home directory containing the .gconf subdirectory could not be found because of a faulty NFS server.

    Samba (File Service)

    If any application needs data stored in a directory on a faulty Samba server, it cannot start or function properly.

    NIS (User Management)

    If your SUSE Linux Enterprise Desktop system relies on a faulty NIS server to provide the user data, users cannot log in to this machine.

    LDAP (User Management)

    If your SUSE Linux Enterprise Desktop system relies on a faulty LDAP server to provide the user data, users cannot log in to this machine.

    Kerberos (Authentication)

    Authentication will not work and login to any machine fails.

    CUPS (Network Printing)

    Users cannot print.

  4. Check whether the network servers are running and whether your network setup allows you to establish a connection:

    Important
    Important: Limitations

    The debugging procedure described below only applies to a simple network server/client setup that does not involve any internal routing. It assumes both server and client are members of the same subnet without the need for additional routing.

    1. Use ping IP_ADDRESS/HOSTNAME (replace with the host name or IP address of the server) to check whether each one of them is up and responding to the network. If this command is successful, it tells you that the host you were looking for is up and running and that the name service for your network is configured correctly.

      If ping fails with destination host unreachable, either your system or the desired server is not properly configured or is down. Check whether your system is reachable by running ping YOUR_IP_ADDRESS or ping YOUR_HOSTNAME from another machine. If you can reach your machine from another machine, it is the server that is not running or not configured correctly.

      If ping fails with unknown host, the name service is not configured correctly or the host name used was incorrect. For further checks on this matter, refer to Step 4.b. If ping still fails, either your network card is not configured correctly or your network hardware is faulty.

    2. Use host HOSTNAME to check whether the host name of the server you are trying to connect to is properly translated into an IP address and vice versa. If this command returns the IP address of this host, the name service is up and running. If the host command fails, check all network configuration files relating to name and address resolution on your host:

      /etc/resolv.conf

      This file is used to keep track of the name server and domain you are currently using. It can be modified manually or automatically adjusted by YaST or DHCP. Automatic adjustment is preferable. However, make sure that this file has the following structure and all network addresses and domain names are correct:

      search FULLY_QUALIFIED_DOMAIN_NAME
      nameserver IPADDRESS_OF_NAMESERVER

      This file can contain more than one name server address, but at least one of them must be correct to provide name resolution to your host. If needed, adjust this file using the YaST Network Settings module (Hostname/DNS tab).

      If your network connection is handled via DHCP, enable DHCP to change host name and name service information by selecting Set Hostname via DHCP (can be set globally for any interface or per interface) and Update Name Servers and Search List via DHCP in the YaST Network Settings module (Hostname/DNS tab).

      /etc/nsswitch.conf

      This file tells Linux where to look for name service information. It should look like this:

       ...
      hosts: files dns
      networks: files dns
      ...

      The dns entry is vital. It tells Linux to use an external name server. Normally, these entries are automatically managed by YaST, but it would be prudent to check.

      If all the relevant entries on the host are correct, let your system administrator check the DNS server configuration for the correct zone information. If you have made sure that the DNS configuration of your host and the DNS server are correct, proceed with checking the configuration of your network and network device.

    3. If your system cannot establish a connection to a network server and you have excluded name service problems from the list of possible culprits, check the configuration of your network card.

      Use the command ip addr show NETWORK_DEVICE to check whether this device was properly configured. Make sure that the inet address with the netmask (/MASK) is configured correctly. An error in the IP address or a missing bit in your network mask would render your network configuration unusable. If necessary, perform this check on the server as well.

    4. If the name service and network hardware are properly configured and running, but some external network connections still get long time-outs or fail entirely, use traceroute FULLY_QUALIFIED_DOMAIN_NAME (executed as root) to track the network route these requests are taking. This command lists any gateway (hop) that a request from your machine passes on its way to its destination. It lists the response time of each hop and whether this hop is reachable. Use a combination of traceroute and ping to track down the culprit and let the administrators know.

Once you have identified the cause of your network trouble, you can resolve it yourself (if the problem is located on your machine) or let the system administrators of your network know about your findings so they can reconfigure the services or repair the necessary systems.
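
The individual checks from Procedure 34.6 can be collected into one short session. The host name jupiter.example.com and the interface eth0 below are examples only; substitute your own server and network device:

    ping -c 3 jupiter.example.com     # is the server up and the name resolvable?
    host jupiter.example.com          # does the name service translate the name?
    ip addr show eth0                 # is the local interface configured correctly?
    traceroute jupiter.example.com    # run as root: where does the route break?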

34.5.1 NetworkManager Problems

If you have a problem with network connectivity, narrow it down as described in Procedure 34.6, “How to Identify Network Problems”. If NetworkManager seems to be the culprit, proceed as follows to get logs providing hints on why NetworkManager fails:

  1. Open a shell and log in as root.

  2. Restart the NetworkManager:

    systemctl restart NetworkManager
  3. Open a Web page, for example, http://www.opensuse.org, as a normal user to see if you can connect.

  4. Collect any information about the state of NetworkManager in /var/log/NetworkManager.
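
On systemd-based systems, NetworkManager messages are also written to the system journal, so the following query, run as root, is a convenient alternative to reading the log file:

    journalctl -u NetworkManager.service -b    # messages from the current boot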

For more information about NetworkManager, refer to Chapter 30, Using NetworkManager.

34.6 Data Problems

Data problems occur when the machine may or may not boot properly but, in either case, it is clear that there is data corruption on the system and that the system needs to be recovered. These situations call for a backup of your critical data, enabling you to recover the system state from before your system failed.

34.6.1 Managing Partition Images

Sometimes you need to perform a backup of an entire partition or even hard disk. Linux comes with the dd tool, which can create an exact copy of your disk. Combined with gzip, you can save some space.

Procedure 34.7: Backing up and Restoring Hard Disks
  1. Start a shell as user root.

  2. Select your source device. Typically this is something like /dev/sda (labeled as SOURCE).

  3. Decide where you want to store your image (labeled as BACKUP_PATH). It must be different from your source device. In other words: if you make a backup from /dev/sda, your image file must not be stored under /dev/sda.

  4. Run the commands to create a compressed image file:

    dd if=/dev/SOURCE | gzip > /BACKUP_PATH/image.gz
  5. Restore the hard disk with the following commands:

    gzip -dc /BACKUP_PATH/image.gz | dd of=/dev/SOURCE

If you only need to back up a partition, replace the SOURCE placeholder with your respective partition. In this case, your image file can lie on the same hard disk, but on a different partition.
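
For example, backing up only the first partition of the first disk to an image file on another partition could look like the following; the device and path names are illustrative:

    dd if=/dev/sda1 | gzip > /backup/sda1.img.gz       # create the image
    gzip -dc /backup/sda1.img.gz | dd of=/dev/sda1     # restore it later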

34.6.2 Using the Rescue System

  • Filename: system_repair.xml
  • ID: sec.trouble.data.recover.rescue

There are several reasons a system could fail to come up and run properly. A corrupted file system following a system crash, corrupted configuration files, or a corrupted boot loader configuration are the most common ones.

To help you to resolve these situations, SUSE Linux Enterprise Desktop contains a rescue system that you can boot. The rescue system is a small Linux system that can be loaded into a RAM disk and mounted as root file system, allowing you to access your Linux partitions from the outside. Using the rescue system, you can recover or modify any important aspect of your system.

  • Manipulate any type of configuration file.

  • Check the file system for defects and start automatic repair processes.

  • Access the installed system in a change root environment.

  • Check, modify, and re-install the boot loader configuration.

  • Recover from a badly installed device driver or unusable kernel.

  • Resize partitions using the parted command. Find more information about this tool at the GNU Parted Web site http://www.gnu.org/software/parted/parted.html.

The rescue system can be loaded from various sources and locations. The simplest option is to boot the rescue system from the original installation medium.

  1. Insert the installation medium into your DVD drive.

  2. Reboot the system.

  3. At the boot screen, press F4 and choose DVD-ROM. Then choose Rescue System from the main menu.

  4. Enter root at the Rescue: prompt. A password is not required.

If your hardware setup does not include a DVD drive, you can boot the rescue system from a network source. The following example applies to a remote boot scenario—if using another boot medium, such as a DVD, modify the info file accordingly and boot as you would for a normal installation.

  1. Enter the configuration of your PXE boot setup and add the lines install=PROTOCOL://INSTSOURCE and rescue=1. If you need to start the repair system, use repair=1 instead. As with a normal installation, PROTOCOL stands for any of the supported network protocols (NFS, HTTP, FTP, etc.) and INSTSOURCE for the path to your network installation source.

  2. Boot the system using Wake on LAN, as described in Section 6.7, “Wake on LAN”.

  3. Enter root at the Rescue: prompt. A password is not required.

Once you have entered the rescue system, you can use the virtual consoles that can be reached with Alt+F1 to Alt+F6.

A shell and other useful utilities, such as the mount program, are available in the /bin directory. The /sbin directory contains important file and network utilities for reviewing and repairing the file system. This directory also contains the most important binaries for system maintenance, such as fdisk, mkfs, mkswap, mount, and shutdown, as well as ip and ss for maintaining the network. The directory /usr/bin contains the vi editor, find, less, and SSH.

To see the system messages, either use the command dmesg or view the system log with journalctl.

34.6.2.1 Checking and Manipulating Configuration Files

As an example for a configuration that might be fixed using the rescue system, imagine you have a broken configuration file that prevents the system from booting properly. You can fix this using the rescue system.

To manipulate a configuration file, proceed as follows:

  1. Start the rescue system using one of the methods described above.

  2. To mount a root file system located under /dev/sda6 to the rescue system, use the following command:

    mount /dev/sda6 /mnt

    All directories of the system are now located under /mnt.

  3. Change the directory to the mounted root file system:

    cd /mnt
  4. Open the problematic configuration file in the vi editor. Adjust and save the configuration.

  5. Unmount the root file system from the rescue system:

    umount /mnt
  6. Reboot the machine.

34.6.2.2 Repairing and Checking File Systems

Generally, file systems cannot be repaired on a running system. If you encounter serious problems, you may not even be able to mount your root file system, and the system boot may end with a kernel panic. In this case, the only way is to repair the system from the outside. The system contains the utilities to check and repair the btrfs, ext2, ext3, ext4, reiserfs, xfs, dosfs, and vfat file systems. Look for the command fsck.FILESYSTEM. For example, if you need a file system check for btrfs, use fsck.btrfs.
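
For example, checking an unmounted ext4 partition from the rescue system could look like this; /dev/sda3 is an assumed device name:

    fsck.ext4 -p /dev/sda3    # check and automatically repair minor problems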

34.6.2.3 Accessing the Installed System

If you need to access the installed system from the rescue system, for example to modify the boot loader configuration or to run a hardware configuration utility, you need to do this in a change root environment.

To set up a change root environment based on the installed system, proceed as follows:

  1. Tip
    Tip: Import LVM Volume Groups

    If you are using a LVM setup (refer to Section 9.2, “LVM Configuration” for more general details), import all existing volume groups in order to be able to find and mount the device(s):

    vgimport -a

    Run lsblk to check which node corresponds to the root partition. It is /dev/sda2 in our example:

    lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda           8:0    0 149,1G  0 disk
    ├─sda1        8:1    0     2G  0 part  [SWAP]
    ├─sda2        8:2    0    20G  0 part  /
    └─sda3        8:3    0   127G  0 part
      └─cr_home 254:0    0   127G  0 crypt /home
  2. Mount the root partition from the installed system:

    mount /dev/sda2 /mnt
  3. Mount /proc, /dev, and /sys partitions:

    mount -t proc none /mnt/proc
    mount --rbind /dev /mnt/dev
    mount --rbind /sys /mnt/sys
  4. Now you can change root into the new environment, keeping the bash shell:

    chroot /mnt /bin/bash
  5. Finally, mount the remaining partitions from the installed system:

    mount -a
  6. Now you have access to the installed system. Before rebooting the system, unmount the partitions with umount -a and leave the change root environment with exit.

Warning
Warning: Limitations

Although you have full access to the files and applications of the installed system, there are some limitations. The kernel that is running is the one that was booted with the rescue system, not with the change root environment. It only supports essential hardware, and it is not possible to add kernel modules from the installed system unless the kernel versions are identical. Always check the version of the currently running (rescue) kernel with uname -r and then find out if a matching subdirectory exists in the /lib/modules directory in the change root environment. If yes, you can use the installed modules; otherwise you need to supply their correct versions on other media, such as a flash disk. Most often the rescue kernel version differs from the installed one, in which case you cannot simply access a sound card, for example. It is also not possible to start a graphical user interface.

Also note that you leave the change root environment when you switch the console with Alt+F1 to Alt+F6.

34.6.2.4 Modifying and Re-installing the Boot Loader

Sometimes a system cannot boot because the boot loader configuration is corrupted. The start-up routines cannot, for example, translate physical drives to the actual locations in the Linux file system without a working boot loader.

To check the boot loader configuration and re-install the boot loader, proceed as follows:

  1. Perform the necessary steps to access the installed system as described in Section 34.6.2.3, “Accessing the Installed System”.

  2. Check that the GRUB 2 boot loader is installed on the system. If not, install the package grub2 and run

    grub2-install /dev/sda
  3. Check whether the following files are correctly configured according to the GRUB 2 configuration principles outlined in Chapter 13, The Boot Loader GRUB 2 and apply fixes if necessary.

    • /etc/default/grub

    • /boot/grub2/device.map (optional file, only present if created manually)

    • /boot/grub2/grub.cfg (this file is generated, do not edit)

    • /etc/sysconfig/bootloader

  4. Re-install the boot loader using the following command sequence:

    grub2-mkconfig -o /boot/grub2/grub.cfg
  5. Unmount the partitions, log out from the change root environment, and reboot the system:

    umount -a
    exit
    reboot

34.6.2.5 Fixing Kernel Installation

A kernel update may introduce a new bug which can impact the operation of your system. For example, a driver for a piece of hardware in your system may be faulty, which prevents you from accessing and using it. In this case, revert to the last working kernel (if available on the system) or install the original kernel from the installation media.

Tip
Tip: How to Keep Last Kernels after Update

To prevent failures to boot after a faulty kernel update, use the kernel multiversion feature and tell libzypp which kernels you want to keep after the update.

For example, to always keep the last two kernels and the currently running one, add

multiversion.kernels = latest,latest-1,running

to the /etc/zypp/zypp.conf file. See Chapter 12, Installing Multiple Kernel Versions for more information.
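
To verify the setting and see which kernel versions are currently installed, a quick check could look like this; kernel-default is the common kernel flavor and may differ on your system:

    grep multiversion.kernels /etc/zypp/zypp.conf
    rpm -q kernel-default    # lists all installed versions of this flavor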

A similar case is when you need to re-install or update a broken driver for a device not supported by SUSE Linux Enterprise Desktop. For example, a hardware vendor may use a specific device, such as a hardware RAID controller, which needs a binary driver to be recognized by the operating system. The vendor typically releases a Driver Update Disk (DUD) with the fixed or updated version of the required driver.

In both cases you need to access the installed system in the rescue mode and fix the kernel related problem, otherwise the system may fail to boot correctly:

  1. Boot from the SUSE Linux Enterprise Desktop installation media.

  2. If you are recovering after a faulty kernel update, skip this step. If you need to use a driver update disk (DUD), press F6 to load the driver update after the boot menu appears, and choose the path or URL to the driver update and confirm with Yes.

  3. Choose Rescue System from the boot menu and press Enter. If you chose to use DUD, you will be asked to specify where the driver update is stored.

  4. Enter root at the Rescue: prompt. A password is not required.

  5. Manually mount the target system and change root into the new environment. For more information, see Section 34.6.2.3, “Accessing the Installed System”.

  6. If using DUD, install/re-install/update the faulty device driver package. Always make sure the installed kernel version exactly matches the version of the driver you are installing.

    If you are fixing a faulty kernel update installation, you can install the original kernel from the installation media with the following procedure.

    1. Identify your DVD device with hwinfo --cdrom and mount it with mount /dev/sr0 /mnt.

    2. Navigate to the directory where your kernel files are stored on the DVD, for example cd /mnt/suse/x86_64/.

    3. Install required kernel-*, kernel-*-base, and kernel-*-extra packages of your flavor with the rpm -i command.

  7. Update configuration files and reinitialize the boot loader if needed. For more information, see Section 34.6.2.4, “Modifying and Re-installing the Boot Loader”.

  8. Remove any bootable media from the system drive and reboot.

A Documentation Updates

  • Filename: admin_docupdates.xml
  • ID: app.admin.docupdates

This chapter lists content changes for this document.

This manual was updated on the following dates:

A.1 January 2018 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)

A.2 December 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)

General

A.3 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)

General
Chapter 4, YaST
Chapter 5, YaST in Text Mode
Chapter 6, Managing Software with Command Line Tools
Chapter 7, System Recovery and Snapshot Management with Snapper
Chapter 8, Remote Access with VNC
Chapter 9, File Copying with RSync
  • Completely revised former File Synchronization chapter and focused on Rsync.

Chapter 14, The systemd Daemon
Chapter 17, Basic Networking
Chapter 22, Dynamic Kernel Device Management with udev
  • Fixed udevadm commands.

Chapter 23, Live Patching the Linux Kernel Using kGraft

Updated Section 23.4, “Patch Lifecycle” (Fate #322212).

Chapter 24, Special System Features
Part II, “Booting a Linux System”
  • Reordered included chapters so that they follow the boot process order.

Bugfixes

A.4 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.

Chapter 3, YaST Online Update
Chapter 6, Managing Software with Command Line Tools
  • zypper patch no longer installs optional patches by default. To install optional patches, use the --with-optional parameter (FATE#320447).

Chapter 7, System Recovery and Snapshot Management with Snapper
Chapter 11, Introduction to the Booting Process
  • Advised users to repair file system in case root file system fails on boot time (FATE#320443).

Chapter 13, The Boot Loader GRUB 2
Chapter 17, Basic Networking
Chapter 25, Time Synchronization with NTP
  • Added information on the Synchronize without Daemon start-up option. Chroot jail is no longer the default (FATE #320392).

Bugfixes

A.5 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)

Chapter 11, Introduction to the Booting Process

Added a note about initramfs migration from swap to LVM (https://bugzilla.suse.com/show_bug.cgi?id=).

A.6 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

General
  • SMT Guide is now part of the documentation for SUSE Linux Enterprise Desktop.

  • Add-ons provided by SUSE have been renamed as modules and extensions. The manuals have been updated to reflect this change.

  • Numerous small fixes and additions to the documentation, based on technical feedback.

  • The registration service has been changed from Novell Customer Center to SUSE Customer Center.

  • In YaST, you will now reach Network Settings via the System group. Network Devices is gone (https://bugzilla.suse.com/show_bug.cgi?id=867809).

Chapter 7, System Recovery and Snapshot Management with Snapper
Chapter 8, Remote Access with VNC
Chapter 6, Managing Software with Command Line Tools
Chapter 16, journalctl: Query the systemd Journal
Chapter 13, The Boot Loader GRUB 2
  • Updated/simplified the whole chapter to match the latest GRUB version, both command line and YaST version.

Chapter 12, UEFI (Unified Extensible Firmware Interface)
Chapter 17, Basic Networking
Available Data Synchronization Software
  • Mentioned cloud computing for file synchronization.

Chapter 34, Common Problems and Their Solutions
Part III, “System”
Bugfixes

A.7 February 2015 (Documentation Maintenance Update)

Chapter 20, Accessing File Systems with FUSE
  • Only the ntfs-3g plug-in is shipped with SUSE Linux Enterprise Desktop (Doc Comment #26799).

Chapter 14, The systemd Daemon

A typo in a command has been fixed (https://bugzilla.suse.com/show_bug.cgi?id=900219).

A.8 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)

General
  • Removed all KDE documentation and references because KDE is no longer shipped.

  • Removed all references to SuSEconfig, which is no longer supported (Fate #100011).

  • Move from System V init to systemd (Fate #310421). Updated affected parts of the documentation.

  • YaST Runlevel Editor has changed to Services Manager (Fate #312568). Updated affected parts of the documentation.

  • Removed all references to ISDN support, as ISDN support has been removed (Fate #314594).

  • Removed all references to the YaST DSL module as it is no longer shipped (Fate #316264).

  • Removed all references to the YaST Modem module as it is no longer shipped (Fate #316264).

  • Btrfs has become the default file system for the root partition (Fate #315901). Updated affected parts of the documentation.

  • The dmesg now provides human-readable time stamps in ctime()-like format (Fate #316056). Updated affected parts of the documentation.

  • syslog and syslog-ng have been replaced by rsyslog (Fate #316175). Updated affected parts of the documentation.

  • MariaDB is now shipped as the relational database instead of MySQL (Fate #313595). Updated affected parts of the documentation.

  • SUSE-related products are no longer available from http://download.novell.com but from http://download.suse.com. Adjusted links accordingly.

  • Novell Customer Center has been replaced with SUSE Customer Center. Updated affected parts of the documentation.

  • /var/run is mounted as tmpfs (Fate #303793). Updated affected parts of the documentation.

  • The following architectures are no longer supported: IA64 and x86. Updated affected parts of the documentation.

  • The traditional method for setting up the network with ifconfig has been replaced by wicked. Updated affected parts of the documentation.

  • A lot of networking commands are deprecated and have been replaced by newer commands (usually ip). Updated affected parts of the documentation.

    arp: ip neighbor
    ifconfig: ip addr, ip link
    iptunnel: ip tunnel
    iwconfig: iw
    nameif: ip link, ifrename
    netstat: ss, ip route, ip -s link, ip maddr
    route: ip route
  • Numerous small fixes and additions to the documentation, based on technical feedback.

Chapter 3, YaST Online Update
  • YaST provides an option to enable or disable the use of delta RPMs (Fate #314867).

  • Before installing patches that require a reboot, you are notified by YaST and can choose how to proceed.

Chapter 5, YaST in Text Mode
  • Added information on how to filter and select packages in the software installation module.

Chapter 6, Managing Software with Command Line Tools
Chapter 7, System Recovery and Snapshot Management with Snapper
Chapter 8, Remote Access with VNC
  • The default VNC viewer is now tigervnc.

  • Added corrections on window manager start-up in persistent VNC sessions.

Chapter 11, Introduction to the Booting Process
  • Significantly shortened the chapter, because System V init has been replaced by systemd. systemd is now described in a separate chapter: Chapter 14, The systemd Daemon.

Chapter 14, The systemd Daemon
Chapter 16, journalctl: Query the systemd Journal

Added a new chapter (http://bugzilla.suse.com/show_bug.cgi?id=878352).

Chapter 13, The Boot Loader GRUB 2
Chapter 12, UEFI (Unified Extensible Firmware Interface)
  • Updated the chapter and added new features (Fate #314510, Fate #316365).

  • Added instructions on where to find the SUSE Key certificate (Doc Comment #25080).

Chapter 18, Printer Operation

Updated chapter and section according to new CUPS version and with PDF now being a common printing data format (Fate #314630).

Chapter 19, The X Window System
Chapter 17, Basic Networking
Chapter 27, Samba
Chapter 26, Sharing File Systems with NFS
  • Configuring NFSv4 shares is now mostly similar to NFSv3; in particular, the previously required bind mount setting is now deprecated (Fate #315589).

  • Removed section about NFS server configuration.

Chapter 28, On-Demand Mounting with Autofs
  • Added a chapter on autofs (Fate #316185).

Chapter 31, Power Management
  • Removed obsolete references to the pm-utils package.

Chapter 34, Common Problems and Their Solutions
Wi-Fi Configuration
Tablet PCs
  • Removed deprecated chapter about tablet PCs.

Bugfixes

B An Example Network

This example network is used across all network-related chapters of the SUSE® Linux Enterprise Desktop documentation.

SUSE Linux Enterprise Desktop 12 SP3

Deployment Guide

Shows how to install single or multiple systems and how to exploit the product's inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly customized, and automated installation technique.

Publication Date: May 07, 2018
About This Guide
Required Background
Available Documentation
Feedback
Documentation Conventions
1 Planning for SUSE Linux Enterprise Desktop
1.1 Hardware Requirements
1.2 Reasons to Use SUSE Linux Enterprise Desktop
I Installation Preparation
2 Installation on AMD64 and Intel 64
2.1 System Requirements for Operating Linux
2.2 Installation Considerations
2.3 Boot and Installation Media
2.4 Installation Procedure
2.5 Controlling the Installation
2.6 Dealing with Boot and Installation Problems
II The Installation Workflow
3 Installation with YaST
3.1 Choosing the Installation Method
3.2 System Start-up for Installation
3.3 Steps of the Installation
3.4 Installer Self-Update
3.5 Language, Keyboard and License Agreement
3.6 Network Settings
3.7 SUSE Customer Center Registration
3.8 Extension Selection
3.9 Suggested Partitioning
3.10 Clock and Time Zone
3.11 Create New User
3.12 Password for the System Administrator root
3.13 Installation Settings
3.14 Performing the Installation
4 Cloning Disk Images
4.1 Cleaning Up Unique System Identifiers
III Setting Up an Installation Server
5 Setting Up the Server Holding the Installation Sources
5.1 Setting Up an Installation Server Using YaST
5.2 Setting Up an NFS Repository Manually
5.3 Setting Up an FTP Repository Manually
5.4 Setting Up an HTTP Repository Manually
5.5 Managing an SMB Repository
5.6 Using ISO Images of the Installation Media on the Server
6 Preparing the Boot of the Target System
6.1 Setting Up a DHCP Server
6.2 Setting Up a TFTP Server
6.3 Installing Files on TFTP Server
6.4 PXELINUX Configuration Options
6.5 Preparing the Target System for PXE Boot
6.6 Preparing the Target System for Wake on LAN
6.7 Wake on LAN
6.8 Wake on LAN with YaST
6.9 Booting from CD or USB Drive Instead of PXE
IV Remote Installation
7 Remote Installation
7.1 Installation Scenarios for Remote Installation
7.2 Booting the Target System for Installation
7.3 Monitoring the Installation Process
V Initial System Configuration
8 Setting Up Hardware Components with YaST
8.1 Setting Up Your System Keyboard Layout
8.2 Setting Up Sound Cards
8.3 Setting Up a Printer
8.4 Setting Up a Scanner
9 Advanced Disk Setup
9.1 Using the YaST Partitioner
9.2 LVM Configuration
9.3 Soft RAID Configuration with YaST
10 Installing or Removing Software
10.1 Definition of Terms
10.2 Registering Installed System
10.3 Using the YaST Software Manager
10.4 Managing Software Repositories and Services
10.5 Keeping the System Up-to-date
11 Installing Modules, Extensions, and Third Party Add-On Products
11.1 List of Optional Modules
11.2 Installing Modules and Extensions from Online Channels
11.3 Installing Extensions and Third Party Add-On Products from Media
11.4 SUSE Software Development Kit (SDK) 12 SP3
11.5 SUSE Package Hub
12 Installing Multiple Kernel Versions
12.1 Enabling and Configuring Multiversion Support
12.2 Installing/Removing Multiple Kernel Versions with YaST
12.3 Installing/Removing Multiple Kernel Versions with Zypper
13 Managing Users with YaST
13.1 User and Group Administration Dialog
13.2 Managing User Accounts
13.3 Additional Options for User Accounts
13.4 Changing Default Settings for Local Users
13.5 Assigning Users to Groups
13.6 Managing Groups
13.7 Changing the User Authentication Method
14 Changing Language and Country Settings with YaST
14.1 Changing the System Language
14.2 Changing the Country and Time Settings
VI Updating and Upgrading SUSE Linux Enterprise
15 Life Cycle and Support
15.1 Terminology
15.2 Product Life Cycle
15.3 Module Life Cycles
15.4 Generating Periodic Life Cycle Report
15.5 Support Levels
15.6 Repository Model
16 Upgrading SUSE Linux Enterprise
16.1 Supported Upgrade Paths to SLE 12 SP3
16.2 Online and Offline Upgrade
16.3 Preparing the System
17 Upgrading Offline
17.1 Conceptual Overview
17.2 Starting the Upgrade from Installation Medium
17.3 Starting Upgrade from Network Source
17.4 Starting Upgrade from Hard Disk
17.5 Enabling Automatic Upgrade
17.6 Upgrading SUSE Linux Enterprise
17.7 Updating via SUSE Manager
17.8 Updating Registration Status after Rollback
17.9 Registering Your System
18 Upgrading Online
18.1 Conceptual Overview
18.2 Service Pack Migration Workflow
18.3 Canceling Service Pack Migration
18.4 Upgrading with the Online Migration Tool (YaST)
18.5 Upgrading with Zypper
18.6 Upgrading with Plain Zypper
18.7 Rolling Back a Service Pack
19 Backporting Source Code
19.1 Reasons for Backporting
19.2 Reasons against Backports
19.3 The Implications of Backports for Interpreting Version Numbers
19.4 How to Check Which Bugs are Fixed and Which Features are Backported and Available
A Documentation Updates
A.1 January 2018 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)
A.2 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
A.3 April 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP2)
A.4 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
A.5 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)
A.6 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)
A.7 February 2015 (Documentation Maintenance Update)
A.8 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

SUSE Linux Enterprise Desktop can be installed in several ways. It is impossible to cover every combination of boot method, installation server, automated installation, and image deployment. This manual helps you select the appropriate deployment method for your installation.

Part II, “The Installation Workflow”

Most tasks that are needed during installations are described here. This includes the manual setup of your computer and installation of additional software.

Part III, “Setting Up an Installation Server”

SUSE® Linux Enterprise Desktop can be installed in different ways. Apart from the usual media installation, you can choose from various network-based approaches. This part describes setting up an installation server and how to prepare the boot of the target system for installation.

Part IV, “Remote Installation”

This part introduces the most common installation scenarios for remote installations. While some still require user interaction or some degree of physical access to the target system, others are completely automated and hands-off. Learn which approach is best for your scenario.

Part V, “Initial System Configuration”

Learn how to configure your system after installation. This part covers common tasks like setting up hardware components, installing or removing software, managing users, or changing settings with YaST.

Part VI, “Updating and Upgrading SUSE Linux Enterprise”

This part provides background information on terminology, SUSE product life cycles and Service Pack releases, and recommended upgrade policies.

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Available Documentation

Note
Note: Online Documentation and Latest Updates

Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates, and browse or download the documentation in various formats.

In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.

The following documentation is available for this product:

Installation Quick Start

Lists the system requirements and guides you step-by-step through the installation of SUSE Linux Enterprise Desktop from DVD, or from an ISO image.

Deployment Guide

Shows how to install single or multiple systems and how to exploit the product's inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly customized, and automated installation technique.

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Security Guide

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

System Analysis and Tuning Guide

An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

GNOME User Guide

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

1 Planning for SUSE Linux Enterprise Desktop

This chapter is addressed mainly to corporate system administrators who face the task of having to deploy SUSE® Linux Enterprise Desktop at their site. Rolling out SUSE Linux Enterprise Desktop to an entire site should involve careful planning and consideration of the following questions:

For which purpose will the SUSE Linux Enterprise Desktop workstations be used?

Determine the purpose for which SUSE Linux Enterprise Desktop should be used and make sure that the hardware and software used can meet these requirements. Consider testing your setup on a single machine before rolling it out to the entire site.

How many workstations should be installed?

Determine the scope of your deployment of SUSE Linux Enterprise Desktop. Depending on the number of installations planned, consider different approaches to the installation or even a mass installation using SUSE Linux Enterprise's unique AutoYaST or KIWI technology.

How do you get software updates for your deployment?

All patches provided by SUSE for your product are available for download to registered users at http://download.suse.com/.

Do you need help for your local deployment?

SUSE provides training, support, and consulting for all topics pertaining to SUSE Linux Enterprise Desktop. Find more information about this at http://www.suse.com/products/desktop/.

1.1 Hardware Requirements

For a standard installation of SUSE Linux Enterprise Desktop, including the desktop environment and a wealth of applications, the following configuration is recommended:

  • Intel Pentium IV, 2.4 GHz or higher or any AMD64 or Intel 64 processor

  • 1–2 physical CPUs

  • 512 MB physical RAM or higher

  • 3 GB of available disk space or more

  • 1024 x 768 display resolution (or higher)

1.2 Reasons to Use SUSE Linux Enterprise Desktop

Let the following items guide you in your selection of SUSE Linux Enterprise Desktop and determining the purpose of the installed systems:

Wealth of Applications

SUSE Linux Enterprise Desktop's broad offering of software appeals to both professional users in corporate environments and to home users or users in smaller networks.

Ease of Use

SUSE Linux Enterprise Desktop comes with the enterprise-ready desktop environment GNOME. It enables users to comfortably adjust to a Linux system while maintaining their efficiency and productivity. To explore GNOME in detail, refer to the GNOME User Guide.

Support for Mobile Users

With the NetworkManager technology fully integrated into SUSE Linux Enterprise Desktop and its GNOME desktop environment, mobile users will enjoy the freedom of easily joining and switching wired and wireless networks.

Seamless Integration into Existing Networks

SUSE Linux Enterprise Desktop was designed to be a versatile network citizen. It cooperates with various network types:

Pure Linux Networks.  SUSE Linux Enterprise Desktop is a complete Linux client and supports all the protocols used in traditional Linux and Unix* environments. It integrates well with networks consisting of other SUSE Linux or SUSE Linux Enterprise machines. LDAP, NIS, and local authentication are supported.

Windows Networks.  SUSE Linux Enterprise Desktop supports Active Directory as an authentication source. It offers you all the advantages of a secure and stable Linux operating system plus convenient interaction with other Windows clients, as well as the means to manipulate your Windows user data from a Linux client. Explore this feature in detail in Chapter 7, Active Directory Support.

Application Security with AppArmor

SUSE Linux Enterprise Desktop enables you to secure your applications by enforcing security profiles tailor-made for your applications. To learn more about AppArmor, refer to http://www.suse.com/documentation/apparmor/.

Part I Installation Preparation

2 Installation on AMD64 and Intel 64

This chapter describes the steps necessary to prepare for the installation of SUSE Linux Enterprise Desktop on AMD64 and Intel 64 computers. It introduces the steps required to prepare for various installation methods. The list of hardware requirements provides an overview of systems supported by SUSE Linux Enterprise Desktop. Find information about available installation methods and several commonly known problems. Also learn how to control the installation, provide installation media, and boot with regular methods.

2 Installation on AMD64 and Intel 64

Abstract

This chapter describes the steps necessary to prepare for the installation of SUSE Linux Enterprise Desktop on AMD64 and Intel 64 computers. It introduces the steps required to prepare for various installation methods. The list of hardware requirements provides an overview of systems supported by SUSE Linux Enterprise Desktop. Find information about available installation methods and several commonly known problems. Also learn how to control the installation, provide installation media, and boot with regular methods.

2.1 System Requirements for Operating Linux

The SUSE® Linux Enterprise Desktop operating system can be deployed on a wide range of hardware. It is impossible to list all the different combinations of hardware SUSE Linux Enterprise Desktop supports. However, to provide you with a guide to help you during the planning phase, the minimum requirements are presented here.

If you want to be sure that a given computer configuration will work, find out which platforms have been certified by SUSE. Find a list at https://www.suse.com/yessearch/.

2.1.1 Hardware for Intel 64 and AMD64

The Intel 64 and AMD64 architectures support the simple migration of x86 software to 64 bits. Like the x86 architecture, they constitute a cost-effective alternative.

CPU

All CPUs available on the market to date are supported.

Maximum Number of CPUs

The maximum number of CPUs supported by software design is 8192 for Intel 64 and AMD64. If you plan to use such a large system, check the supported devices on our hardware system certification Web page at https://www.suse.com/yessearch/.

Memory Requirements

A minimum of 512 MB of memory is required for a minimal installation. However, the minimum recommended is 1024 MB or 512 MB per CPU on multiprocessor computers. Add 150 MB for a remote installation via HTTP or FTP. Note that these values are only valid for the installation of the operating system—the actual memory requirement in production depends on the system's workload.

Hard Disk Requirements

The disk requirements depend largely on the installation selected and how you use your machine. Minimum requirements for different selections are:

  • Minimal System: 800 MB–1 GB

  • Minimal X Window System: 1.4 GB

  • GNOME Desktop: 3.5 GB

  • All patterns: 8.5 GB

  • Using snapshots for virtualization: min. 8 GB

Boot Methods

The computer can be booted from a CD or a network. A special boot server is required to boot over the network. This can be set up with SUSE Linux Enterprise Server.

2.2 Installation Considerations

This section encompasses many factors that need to be considered before installing SUSE Linux Enterprise Desktop on AMD64 and Intel 64 hardware.

2.2.1 Installation Type

SUSE Linux Enterprise Desktop is normally installed as an independent operating system. With the introduction of virtualization, it is also possible to run multiple instances of SUSE Linux Enterprise Desktop on the same hardware. However, the installation of the VM Host Server is performed like a typical installation with some additional packages.

2.2.2 Boot Methods

Depending on the hardware used, the following boot methods are available for the first boot procedure (prior to the installation of SUSE Linux Enterprise Desktop).

Table 2.1: Boot Options

  • CD or DVD drive: The simplest booting method. The system requires a locally available CD-ROM or DVD-ROM drive.

  • Flash disks: Find the images required for creating boot disks on the first CD or DVD in the /boot directory. See also the README in the same directory. Booting from a USB memory stick is only possible if the BIOS of the machine supports this method.

  • PXE or bootp: Must be supported by the BIOS or by the firmware of the system used. This option requires a boot server in the network. This task can be handled by a separate SUSE Linux Enterprise Server.

  • Hard disk: SUSE Linux Enterprise Desktop can also be booted from hard disk. For this, copy the kernel (linux) and the installation system (initrd) from the /boot/loader directory of the first CD or DVD onto the hard disk and add an appropriate entry to the boot loader.

2.2.3 Installation Source

When installing SUSE Linux Enterprise Desktop, the actual installation data must be available in the network, on a hard disk partition, or on a local DVD. To install from the network, you need an installation server. To make the installation data available, set up any computer in a Unix or Linux environment as an NFS, HTTP, SMB, or FTP server. To make the installation data available from a Windows computer, share the data via SMB.
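As a minimal sketch, a Linux machine can export a copy of the installation media via NFS as follows (the ISO name and the /srv/install path are placeholder assumptions; see Section 5.2, “Setting Up an NFS Repository Manually” for the supported procedure):

    mkdir -p /srv/install
    mount -o loop SLE-12-SP3-Desktop-DVD-x86_64.iso /mnt   # placeholder ISO name
    cp -a /mnt/* /srv/install/
    # /etc/exports entry exporting the directory read-only:
    /srv/install  *(ro,root_squash,sync,no_subtree_check)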

The installation source is particularly easy to select if you configure an SLP server in the local network. For more information, see Chapter 5, Setting Up the Server Holding the Installation Sources.

2.2.4 Installation Target

Most installations are to a local hard disk. Therefore, it is necessary for the hard disk controllers to be available to the installation system. If a special controller (like a RAID controller) needs an extra kernel module, provide a kernel module update disk to the installation system.

Other installation targets may be various types of block devices that provide sufficient disk space and speed to run an operating system. This includes network block devices like iSCSI or SAN. It is also possible to install on network file systems that offer the standard Unix permissions. However, it may be problematic to boot these, because they must be supported by the initramfs before the actual system can start. Such installations can be useful when you need to start the same system in different locations or you plan to use virtualization features like domain migration.

2.2.5 Different Installation Methods

SUSE Linux Enterprise Desktop offers several methods for controlling installation:

  • Installation on the console

  • Installation via serial console

  • Installation with AutoYaST

  • Installation with KIWI images

  • Installation via SSH

  • Installation with VNC

By default, the graphical console is used. If you have many similar computers to install, it is advisable to create an AutoYaST configuration file or a KIWI preload image and make this available to the installation process. See also the documentation for AutoYaST at https://www.suse.com/documentation/sles-12/book_autoyast/data/book_autoyast.html and KIWI at http://doc.opensuse.org/projects/kiwi/doc/.
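For example, once a profile has been created, it can be passed to the installer at the boot prompt; the server address and profile name below are placeholders:

    autoyast=http://192.168.1.1/profiles/autoinst.xml netsetup=dhcp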

2.3 Boot and Installation Media

When installing the system, the media for booting and for installing the system may be different. All combinations of supported media for booting and installing may be used.

2.3.1 Boot Media

Booting a computer depends on the capabilities of the hardware used and the availability of media for the respective boot option.

Booting from DVD

This is the most common way of booting a system. It is straightforward for most computer users but requires a lot of interaction for every installation process.

Booting from a USB Hard Disk

Depending on the hardware used, it is possible to boot from a USB hard disk. The respective media must be created as described in Section 3.2.1, “PC (AMD64/Intel 64/ARM AArch64): System Start-up”.

Booting from the Network

You can only boot a computer directly from the network if this is supported by the computer's firmware or BIOS. This booting method requires a boot server that provides the needed boot images over the network. The exact protocol depends on your hardware. Commonly you need several services, such as TFTP and DHCP, for a PXE boot. If you need a boot server, also read Section 7.1.3, “Remote Installation via VNC—PXE Boot and Wake on LAN”.
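As a rough sketch of the DHCP side of such a setup, the server tells PXE clients where to find the boot loader; the addresses and file name below are placeholder assumptions, and Chapter 6, Preparing the Boot of the Target System describes the complete setup:

    # /etc/dhcpd.conf excerpt
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.199;
      next-server 192.168.1.1;     # TFTP server providing the boot images
      filename "pxelinux.0";       # boot loader fetched via TFTP
    }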

2.3.2 Installation Media

The installation media contain all the packages and meta information necessary to install SUSE Linux Enterprise Desktop. These must be available to the installation system after booting for installation. Several possibilities for providing the installation media to the system are available with SUSE Linux Enterprise Desktop.

Installation from DVD

All necessary data is delivered on the boot media. Depending on the selected installation, a network connection or add-on media may be necessary.

Networked Installation

If you plan to install several systems, providing the installation media over the network makes things a lot easier. It is possible to install from many common protocols, such as NFS, HTTP, FTP, or SMB. For more information on how to run such an installation, refer to Chapter 7, Remote Installation.

2.4 Installation Procedure

This section offers an overview of the steps required for the complete installation of SUSE® Linux Enterprise Desktop in the required mode. Part II, “The Installation Workflow” contains a full description of how to install and configure the system with YaST.

2.4.1 Booting from a Local Interchangeable Drive

DVD-ROM and USB storage devices can be used for installation purposes. Proceed as follows:

  1. Make sure that the drive is entered as a bootable drive in the BIOS.

  2. Insert the boot medium in the drive and start the boot procedure.

  3. The installation boot menu of SUSE Linux Enterprise Desktop allows transferring different parameters to the installation system. See also Section 7.2.2, “Using Custom Boot Options”. If the installation should be performed over the network, specify the installation source here.

  4. If unexpected problems arise during installation, use safe settings to boot.

2.4.2 Installing over the Network

An installation server is required to perform the installation by using a network source. The procedure for installing this server is outlined in Chapter 5, Setting Up the Server Holding the Installation Sources.

If you have an SLP server, select SLP as the installation source in the first boot screen. During the boot procedure, select which of the available installation sources to use.

If the DVD is available on the network, use it as an installation source. In this case, specify the parameter install=<URL> with suitable values at the boot prompt. Find a more detailed description of this parameter in Section 7.2.2, “Using Custom Boot Options”.
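For example, a boot prompt entry pointing to an NFS server could look like the following (the server name and path are placeholders):

    install=nfs://nfs.example.com/PATH_TO_ISO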

2.5 Controlling the Installation

Control the installation in one of several ways. The method most frequently used is to install SUSE® Linux Enterprise Desktop from the computer console. Other options are available for different situations.

2.5.1 Installation on the Computer Console

The simplest way to install SUSE Linux Enterprise Desktop is using the computer console. With this method, a graphical installation program guides you through the installation. This installation method is discussed in detail in Chapter 3, Installation with YaST.

You can still perform the installation on the console without a working graphics mode. The text-based installation program offers the same functionality as the graphical version. Find some hints about navigation in this mode in Section 5.1, “Navigation in Modules”.

2.5.2 Installation Using a Serial Console

For this installation method, you need a second computer that is connected by a null modem cable to the computer on which to install SUSE Linux Enterprise Desktop. Depending on the hardware, even the firmware or BIOS of the computer may already be accessible via the serial console. If this is possible, you can carry out the entire installation using this method. To activate the serial console installation, specify the additional console=ttyS0 parameter at the boot prompt. This should be done after the boot process has completed and before the installation system starts.

On most computers, there are two serial interfaces, ttyS0 and ttyS1. For the installation, you need a terminal program like minicom or screen. To initiate the serial connection, launch the screen program in a local console by entering the following command:

screen /dev/ttyS0 9600

This means that screen listens to the first serial port with a baud rate of 9600. From this point on, the installation proceeds similarly to the text-based installation over this terminal.
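Alternatively, assuming the minicom package is installed, the same connection can be opened with:

    minicom -D /dev/ttyS0 -b 9600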

2.5.3 Installation with SSH

If you do not have direct access to the machine and the installation must be initiated from a management console, you can control the entire installation process over the network. To do this, enter the parameters ssh=1 and ssh.password=SECRET at the boot prompt. An SSH daemon is then launched in the system and you can log in as user root with the password SECRET.

To connect, use ssh -X. X-Forwarding over SSH is supported, if you have a local X server available. Otherwise, YaST provides a text interface over ncurses. YaST then guides you through the installation. This procedure is described in detail in Section 7.1.5, “Simple Remote Installation via SSH—Dynamic Network Configuration”.

If you do not have a DHCP server available in your local network, manually assign an IP address to the installation system. Do this by entering the option HostIP=IPADDR at the boot prompt.
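Putting these pieces together, a boot prompt entry for an SSH-controlled installation with a static IP address could look like the following (the addresses are placeholders):

    ssh=1 ssh.password=SECRET hostip=192.168.2.100 netmask=255.255.255.0

Then, from another machine on the same network:

    ssh -X root@192.168.2.100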

2.5.4 Installation over VNC

If you do not have direct access to the system, but want a graphical installation, install SUSE Linux Enterprise Desktop over VNC. This method is described in detail in Section 7.3.1, “VNC Installation”.

As suitable VNC clients are also available for other operating systems, such as Microsoft Windows and macOS, the installation can also be controlled from computers running those operating systems.

2.5.5 Installation with AutoYaST

If you need to install SUSE Linux Enterprise Desktop on several computers with similar hardware, it is recommended you perform the installations using AutoYaST. In this case, start by installing one SUSE Linux Enterprise Desktop and use this to create the necessary AutoYaST configuration files.

AutoYaST is extensively documented at https://www.suse.com/documentation/sles-12/book_autoyast/data/book_autoyast.html.

2.6 Dealing with Boot and Installation Problems

Prior to delivery, SUSE® Linux Enterprise Desktop is subjected to an extensive test program. Despite this, problems occasionally occur during boot or installation.

2.6.1 Problems Booting

Boot problems may prevent the YaST installer from starting on your system. Another symptom is when your system does not boot after the installation has been completed.

Installed System Boots, Not Media

Change your computer's firmware or BIOS so that the boot sequence is correct. To do this, consult the manual for your hardware.

The Computer Hangs

Change the console on your computer so that the kernel outputs are visible. Be sure to check the last outputs. This is normally done by pressing Ctrl–Alt–F10. If you cannot resolve the problem, consult the SUSE Linux Enterprise Desktop support staff. To log all system messages at boot time, use a serial connection as described in Section 2.5, “Controlling the Installation”.

Boot Disk

The boot disk is a useful interim solution if you have difficulties setting the other configurations or if you want to postpone the decision regarding the final boot mechanism. For more details on creating boot disks, see grub2-mkrescue.
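As a minimal sketch, such a rescue image can be created and written to a USB stick (the target device /dev/sdX is a placeholder; double-check it, as the device will be overwritten):

    grub2-mkrescue -o rescue.iso
    dd if=rescue.iso of=/dev/sdX bs=4M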

Virus Warning after Installation

There are BIOS variants that check the structure of the boot sector (MBR) and erroneously display a virus warning after the installation of GRUB 2. Solve this problem by entering the BIOS and looking for corresponding adjustable settings. For example, switch off virus protection. You can switch this option back on again later. It is unnecessary, however, if Linux is the only operating system you use.

2.6.2 Problems Installing

If an unexpected problem occurs during installation, information is needed to determine the cause of the problem. Use the following directions to help with troubleshooting:

  • Check the outputs on the various consoles. You can switch consoles with the key combination Ctrl–Alt–Fn. For example, obtain a shell in which to execute various commands by pressing Ctrl–Alt–F2.

  • Try launching the installation with Safe Settings (press F5 on the installation screen and choose Safe Settings). If the installation works without problems in this case, there is an incompatibility that causes either ACPI or APIC to fail. In some cases, a BIOS or firmware update fixes this problem.

  • Check the system messages on a console in the installation system by entering the command dmesg -T.

2.6.3 Redirecting the Boot Source to the Boot DVD

To simplify the installation process and avoid accidental installations, the default setting on the installation DVD for SUSE Linux Enterprise Desktop is that your system is booted from the first hard disk. At this point, an installed boot loader normally takes over control of the system. This means that the boot DVD can stay in the drive during an installation. To start the installation, choose one of the installation possibilities in the boot menu of the media.

Part II The Installation Workflow

3 Installation with YaST

Install your SUSE® Linux Enterprise Desktop system with YaST, the central tool for installation and configuration of your system. YaST guides you through the installation process of your system. If you are a first-time user of SUSE Linux Enterprise Desktop, you might want to follow the default YaST proposals in most parts, but you can also adjust the settings as described here to fine-tune your system according to your preferences. Help for each installation step is provided by clicking Help.

During the installation process, YaST analyzes both your current system settings and your hardware components. Based on this analysis your system will be set up with a basic configuration including networking (provided the system could be configured using DHCP). To fine-tune the system after the installation has finished, start YaST from the installed system.

4 Cloning Disk Images

If SUSE Linux Enterprise Desktop is installed in a virtualized environment, cloning an existing installation may be the fastest way to deploy further machines. SUSE Linux Enterprise Desktop provides a script to clean up configuration that is unique to each installation. With the introduction of syst…

3 Installation with YaST

Abstract

Install your SUSE® Linux Enterprise Desktop system with YaST, the central tool for installation and configuration of your system. YaST guides you through the installation process of your system. If you are a first-time user of SUSE Linux Enterprise Desktop, you might want to follow the default YaST proposals in most parts, but you can also adjust the settings as described here to fine-tune your system according to your preferences. Help for each installation step is provided by clicking Help.

During the installation process, YaST analyzes both your current system settings and your hardware components. Based on this analysis your system will be set up with a basic configuration including networking (provided the system could be configured using DHCP). To fine-tune the system after the installation has finished, start YaST from the installed system.

3.1 Choosing the Installation Method

After having selected the installation medium, determine the suitable installation method and boot option that best matches your needs:

Installing from the SUSE Linux Enterprise Desktop Media (DVD, USB)

Choose this option if you want to perform a stand-alone installation and do not want to rely on a network to provide the installation data or the boot infrastructure. The installation proceeds exactly as outlined in Section 3.3, “Steps of the Installation”.

Installing from the Live CD

To install from a Live CD, boot the live system from CD. In the running system, launch the installation routine by clicking the Install icon on the desktop. The installation will be executed in a window on the desktop. It is not possible to update an existing system with a Live CD; you can only perform an installation from scratch.

Installing from a Network Server

Choose this option if you have an installation server available in your network or want to use an external server as the source of your installation data. This setup can be configured to boot from physical media (flash disk, CD/DVD, or hard disk) or configured to boot via network using PXE/BOOTP. Refer to Section 3.2, “System Start-up for Installation” for details.

The installation program configures the network connection with DHCP and retrieves the location of the network installation source from the OpenSLP server. If no DHCP is available, choose F4 Source › Network Config › Manual and enter the network data. On EFI systems modify the network boot parameters as described in Section 3.2.1.2, “The Boot Screen on Machines Equipped with UEFI”.

Installing from an SLP Server.  If your network setup supports OpenSLP and your network installation source has been configured to announce itself via SLP (described in Chapter 5, Setting Up the Server Holding the Installation Sources), boot the system, press F4 in the boot screen and select SLP from the menu. On EFI systems set the install parameter to install=slp:/ as described in Section 3.2.1.2, “The Boot Screen on Machines Equipped with UEFI”.

Installing from a Network Source without SLP.  If your network setup does not support OpenSLP for the retrieval of network installation sources, boot the system and press F4 in the boot screen to select the desired network protocol (NFS, HTTP, FTP, or SMB/CIFS) and provide the server's address and the path to the installation media. On EFI systems modify the boot parameter install= as described in Section 3.2.1.2, “The Boot Screen on Machines Equipped with UEFI”.

Installing as a SUSE Linux Enterprise Server Extension

Choose this option if you want to install SUSE Linux Enterprise Desktop on top of SUSE Linux Enterprise Server. Install SUSE Linux Enterprise Server, register at the SUSE Customer Center and choose the SUSE Linux Enterprise Workstation Extension on the Extension Selection screen.

3.2 System Start-up for Installation

The way the system is started for the installation depends on the architecture—system start-up is different for PC (AMD64/Intel 64) or mainframe, for example. If you install SUSE Linux Enterprise Desktop as a VM Guest on a KVM or Xen hypervisor, follow the instructions for the AMD64/Intel 64 architecture.

3.2.1 PC (AMD64/Intel 64/ARM AArch64): System Start-up

SUSE Linux Enterprise Desktop supports several boot options from which you can choose, depending on the hardware available and on the installation scenario you prefer. Booting from the SUSE Linux Enterprise Desktop media is the most straightforward option, but special requirements might call for special setups:

Table 3.1: Boot Options

Boot Option

Description

DVD

This is the easiest boot option. This option can be used if the system has a local DVD-ROM drive that is supported by Linux.

Flash Disks (USB Mass Storage Device)

In case your machine is not equipped with an optical drive, you can boot the installation image from a flash disk. To create a bootable flash disk, you need to copy either the DVD or the Mini CD ISO image to the device using the dd command (the flash disk must not be mounted; all data on the device will be erased):

dd if=PATH_TO_ISO_IMAGE of=USB_STORAGE_DEVICE bs=4M
Important
Important: Compatibility

Note that booting from a USB Mass Storage Device is not supported on UEFI machines and on the POWER architecture.

PXE or BOOTP

Booting over the network must be supported by the system's BIOS or firmware, and a boot server must be available in the network. This task can also be handled by another SUSE Linux Enterprise Desktop system. Refer to Chapter 7, Remote Installation for more information.

Hard Disk

SUSE Linux Enterprise Desktop installation can also be booted from the hard disk. To do this, copy the kernel (linux) and the installation system (initrd) from the directory /boot/ARCHITECTURE/ on the installation media to the hard disk and add an appropriate entry to the existing boot loader of a previous SUSE Linux Enterprise Desktop installation.

Tip
Tip: Booting from DVD on UEFI Machines

DVD1 can be used as a boot medium for machines equipped with UEFI (Unified Extensible Firmware Interface). Refer to your vendor's documentation for specific information. If booting fails, try to enable CSM (Compatibility Support Module) in your firmware.

Note
Note: Add-on Product Installation Media

Media for add-on products (extensions or third-party products) cannot be used as stand-alone installation media. They can either be embedded as additional installation sources during the installation process (see Section 3.8, “Extension Selection”) or be installed from the running system using the YaST Add-on Products module (see Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products for details).

3.2.1.1 The Boot Screen on Machines Equipped with Traditional BIOS

The boot screen displays several options for the installation procedure. Boot from Hard Disk boots the installed system and is selected by default, because the CD is often left in the drive. Select one of the other options with the arrow keys and press Enter to boot it. The relevant options are:

Installation

The normal installation mode. All modern hardware functions are enabled. In case the installation fails, see F5 Kernel for boot options that disable potentially problematic functions.

Upgrade

Perform a system upgrade. For more information refer to Chapter 16, Upgrading SUSE Linux Enterprise.

Rescue System

Starts a minimal Linux system without a graphical user interface. For more information, see Section 34.6.2, “Using the Rescue System”. This option is not available on Live CDs.

Check Installation Media

This option is only available when you install from media created from downloaded ISOs. In this case it is recommended to check the integrity of the installation medium. This option starts the installation system before automatically checking the media. If the check is successful, the normal installation routine starts. If a corrupt medium is detected, the installation routine aborts.

Warning
Warning: Failure of Media Check

If the media check fails, your medium is damaged. Do not continue the installation because installation may fail or you may lose your data. Replace the broken medium and restart the installation process.

Memory Test

Tests your system RAM using repeated read and write cycles. Terminate the test by rebooting. For more information, see Section 34.2.4, “Fails to Boot”. This option is not available on the Live CDs.

The Boot Screen on Machines with a Traditional BIOS
Figure 3.1: The Boot Screen on Machines with a Traditional BIOS

Use the function keys shown at the bottom of the screen to change the language, screen resolution, installation source or to add an additional driver from your hardware vendor:

F1 Help

Get context-sensitive help for the active element of the boot screen. Use the arrow keys to navigate, Enter to follow a link, and Esc to leave the help screen.

F2 Language

Select the display language and a corresponding keyboard layout for the installation. The default language is English (US).

F3 Video Mode

Select various graphical display modes for the installation. By default, the video resolution is automatically determined using KMS (Kernel Mode Setting). If this setting does not work on your system, choose No KMS and, optionally, specify vga=ask on the boot command line to get prompted for the video resolution. Choose Text Mode if the graphical installation causes problems.

F4 Source

Normally, the installation is performed from the inserted installation medium. Here, select other sources, like FTP or NFS servers. If the installation is deployed on a network with an SLP server, select an installation source available on the server with this option. Find information about setting up an installation server with SLP at Chapter 5, Setting Up the Server Holding the Installation Sources.

F5 Kernel

If you encounter problems with the regular installation, this menu offers to disable a few potentially problematic functions. If your hardware does not support ACPI (advanced configuration and power interface) select No ACPI to install without ACPI support. No local APIC disables support for APIC (Advanced Programmable Interrupt Controllers) which may cause problems with some hardware. Safe Settings boots the system with the DMA mode (for CD/DVD-ROM drives) and power management functions disabled.

If you are not sure, try the following options first: Installation—ACPI Disabled or Installation—Safe Settings. Experts can also use the command line (Boot Options) to enter or change kernel parameters.

F6 Driver

Press this key to notify the system that you have an optional driver update for SUSE Linux Enterprise Desktop. With File or URL, load drivers directly before the installation starts. If you select Yes, you are prompted to insert the update disk at the appropriate point in the installation process.

Tip
Tip: Getting Driver Update Disks

Driver updates for SUSE Linux Enterprise are provided at http://drivers.suse.com/. These drivers have been created via the SUSE SolidDriver Program.

3.2.1.2 The Boot Screen on Machines Equipped with UEFI

UEFI (Unified Extensible Firmware Interface) is a new industry standard which replaces and extends the traditional BIOS. The latest UEFI implementations contain the Secure Boot extension, which prevents booting malicious code by only allowing signed boot loaders to be executed. See Chapter 12, UEFI (Unified Extensible Firmware Interface) for more information.

The boot manager GRUB 2, used to boot machines with a traditional BIOS, does not support UEFI; therefore, GRUB 2 is replaced with GRUB 2 for EFI. If Secure Boot is enabled, YaST will automatically select GRUB 2 for EFI for installation. From an administrative and user perspective, both boot manager implementations behave the same and are called GRUB 2 in the following.

Tip
Tip: UEFI and Secure Boot are Supported by Default

The installation routine of SUSE Linux Enterprise Desktop automatically detects if the machine is equipped with UEFI. All installation sources also support Secure Boot. If an EFI system partition already exists on dual boot machines (from a Microsoft Windows 8 installation, for example), it will automatically be detected and used. Partition tables will be written as GPT on UEFI systems.

Warning
Warning: Using Non-Inbox Drivers with Secure Boot

There is no support for adding non-inbox drivers (that is, drivers that do not come with SLE) during installation with Secure Boot enabled. The signing key used for SolidDriver/PLDP is not trusted by default.

To solve this problem, it is necessary to either add the needed keys to the firmware database via firmware/system management tools before the installation or to use a bootable ISO that will enroll the needed keys in the MOK list at first boot. For more information, see Section 12.1, “Secure Boot”.

The boot screen displays several options for the installation procedure. Change the selected option with the arrow keys and press Enter to boot it. The relevant options are:

Installation

The normal installation mode.

Upgrade

Perform a system upgrade. For more information refer to Chapter 16, Upgrading SUSE Linux Enterprise.

Rescue System

Starts a minimal Linux system without a graphical user interface. For more information, see Section 34.6.2, “Using the Rescue System”. This option is not available on Live CDs.

Check Installation Media

This option is only available when you install from media created from downloaded ISOs. In this case it is recommended to check the integrity of the installation medium. This option starts the installation system before automatically checking the media. If the check is successful, the normal installation routine starts. If a corrupt medium is detected, the installation routine aborts.

The Boot Screen on Machines with UEFI
Figure 3.2: The Boot Screen on Machines with UEFI

GRUB 2 for EFI on SUSE Linux Enterprise Desktop does not support a boot prompt or function keys for adding boot parameters. By default, the installation will be started with American English and the boot media as the installation source. A DHCP lookup will be performed to configure the network. To change these defaults or to add additional boot parameters, you need to edit the respective boot entry. Highlight it using the arrow keys and press E. See the on-screen help for editing hints (note that only an English keyboard layout is available at this point). The Installation entry will look similar to the following:

setparams 'Installation'

    set gfxpayload=keep
    echo 'Loading kernel ...'
    linuxefi /boot/x86_64/loader/linux splash=silent
    echo 'Loading initial ramdisk ...'
    initrdefi /boot/x86_64/loader/initrd

Add space-separated parameters to the end of the line starting with linuxefi. To boot the edited entry, press F10. If you access the machine via serial console, press Esc0. A complete list of parameters is available at http://en.opensuse.org/Linuxrc. The most important ones are:

Table 3.2: Installation Sources

  • CD/DVD (default): install=cd:/

  • Hard disk: install=hd:/?device=sda/PATH_TO_ISO

  • SLP: install=slp:/

  • FTP: install=ftp://ftp.example.com/PATH_TO_ISO

  • HTTP: install=http://www.example.com/PATH_TO_ISO

  • NFS: install=nfs:/PATH_TO_ISO

  • SMB / CIFS: install=smb://PATH_TO_ISO

Table 3.3: Network Configuration

  • DHCP (default): netsetup=dhcp

  • Prompt for Parameters: netsetup=hostip,netmask,gateway,nameserver

  • Host IP Address: hostip=192.168.2.100 or hostip=192.168.2.100/24

  • Netmask: netmask=255.255.255.0

  • Gateway: gateway=192.168.5.1

  • Name Server: nameserver=192.168.1.116 or nameserver=192.168.1.116,192.168.1.118

  • Domain Search Path: domain=example.com

Table 3.4: Miscellaneous

  • Driver Updates (Prompt): dud=1

  • Driver Updates (URL): dud=ftp://ftp.example.com/PATH_TO_DRIVER or dud=http://www.example.com/PATH_TO_DRIVER

  • Installation Language: Language=LANGUAGE (supported values for LANGUAGE are, among others, cs_CZ, de_DE, es_ES, fr_FR, ja_JP, pt_BR, pt_PT, ru_RU, zh_CN, and zh_TW)

  • Kernel (No ACPI): acpi=off

  • Kernel (No Local APIC): noapic

  • Video (Disable KMS): nomodeset

  • Video (Start Installer in Text Mode): Textmode=1
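For example, combining parameters from the tables above, an Installation entry edited to install from an FTP server with DHCP-based network setup could look like the following (the server name and path are placeholders):

    linuxefi /boot/x86_64/loader/linux splash=silent install=ftp://ftp.example.com/PATH_TO_ISO netsetup=dhcp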

3.2.2 Boot Parameters for Advanced Setups

To configure access to a local SMT or supportconfig server for the installation, you can specify boot parameters to set up these services during installation. The same applies if you need IPv6 support during the installation.

3.2.2.1 Providing Data to Access an SMT Server

By default, updates for SUSE Linux Enterprise Desktop are delivered by the SUSE Customer Center. If your network provides a so-called SMT server as a local update source, you need to equip the client with the server's URL. Client and server communicate solely via the HTTPS protocol; therefore, you also need to enter a path to the server's certificate if the certificate was not issued by a certificate authority.

Note
Note: Non-Interactive Installation Only

Providing parameters for accessing an SMT server is only needed for non-interactive installations. During an interactive installation the data can be provided during the installation (see Section 3.7, “SUSE Customer Center Registration” for details).

regurl

URL of the SMT server. This URL has a fixed format https://FQN/center/regsvc/. FQN needs to be a fully qualified host name of the SMT server. Example:

regurl=https://smt.example.com/center/regsvc/
regcert

Location of the SMT server's certificate. Specify one of the following locations:

URL

Remote location (HTTP, HTTPS or FTP) from which the certificate can be downloaded. Example:

regcert=http://smt.example.com/smt-ca.crt
local path

Absolute path to the certificate on the local machine. Example:

regcert=/data/inst/smt/smt-ca.cert
Interactive

Use ask to open a pop-up menu during the installation where you can specify the path to the certificate. Do not use this option with AutoYaST. Example:

regcert=ask
Deactivate certificate installation

Use done if the certificate will be installed by an add-on product, or if you are using a certificate issued by an official certificate authority. For example:

regcert=done
Warning
Warning: Beware of Typing Errors

Make sure the values you enter are correct. If regurl has not been specified correctly, the registration of the update source will fail. If a wrong value for regcert has been entered, you will be prompted for a local path to the certificate.

In case regcert is not specified, it will default to http://FQN/smt.crt with FQN being the name of the SMT server.
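Taken together, a non-interactive installation using a local SMT server could add the following to the boot options (the host name is a placeholder):

    regurl=https://smt.example.com/center/regsvc/ regcert=http://smt.example.com/smt-ca.crt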

3.2.2.2 Configuring an Alternative Data Server for supportconfig

The data that supportconfig (see Chapter 33, Gathering System Information for Support for more information) gathers is sent to the SUSE Customer Center by default. It is also possible to set up a local server to collect this data. If such a server is available on your network, you need to set the server's URL on the client. This information needs to be entered at the boot prompt.

supporturl

URL of the server. The URL has the format http://FQN/Path/, where FQN is the fully qualified host name of the server and Path is the location on the server. For example:

supporturl=http://support.example.com/supportconfig/data/

3.2.2.3 Using IPv6 During the Installation

By default you can only assign IPv4 network addresses to your machine. To enable IPv6 during installation, enter one of the following parameters at the boot prompt:

Accept IPv4 and IPv6
ipv6=1
Accept IPv6 only
ipv6only=1

3.2.2.4 Using a Proxy During the Installation

In networks that enforce the use of a proxy server for accessing remote Web sites, registration during the installation is only possible after a proxy server has been configured.

To use a proxy during the installation, press F4 on the boot screen and set the required parameters in the HTTP Proxy dialog. Alternatively provide the kernel parameter proxy at the boot prompt:

proxy=http://USER:PASSWORD@proxy.example.com:PORT

Specifying USER and PASSWORD is optional—if the server allows anonymous access, the following data is sufficient: http://proxy.example.com:PORT.
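
For example, for an anonymous proxy listening on port 3128 (a common default for Squid, used here as an assumption; adjust host and port to your environment):

proxy=http://proxy.example.com:3128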

3.2.2.5 Enabling SELinux Support

Enabling SELinux upon installation start-up enables you to configure it after the installation has been finished without having to reboot. Use the following parameters:

security=selinux selinux=1

3.2.2.6 Enabling the Installer Self-Update

During installation and upgrade, YaST can update itself as described in Section 3.4, “Installer Self-Update” to solve potential bugs discovered after release. The self_update parameter can be used to modify the behavior of this feature.

To enable the installer self-update, set the parameter to 1:

self_update=1

To use a user-defined repository, specify a URL:

self_update=https://updates.example.com/

3.3 Steps of the Installation

The interactive installation of SUSE Linux Enterprise Desktop is split into several steps, which are listed below.

After starting the installation, SUSE Linux Enterprise Desktop loads and configures a minimal Linux system to run the installation procedure. To view the boot messages and copyright notices during this process, press Esc. On completion of this process, the YaST installation program starts and displays the graphical installer.

Tip
Tip: Installation Without a Mouse

If the installer does not detect your mouse correctly, use →| for navigation, arrow keys to scroll, and Enter to confirm a selection. Various buttons or selection fields contain a letter with an underscore. Use Alt+Letter to select a button or a selection directly instead of navigating there with →|.

3.4 Installer Self-Update

During the installation and upgrade process, YaST is able to update itself to solve bugs in the installer that were discovered after the release. This functionality is enabled by default; to disable it, set the boot parameter self_update to 0. For more information, see Section 3.2.2.6, “Enabling the Installer Self-Update”.

Although this feature was designed to run without user intervention, it is worth knowing how it works. If you are not interested, you can jump directly to Section 3.5, “Language, Keyboard and License Agreement” and skip the rest of this section.

Tip
Tip: Language Selection

The installer self-update is executed before the language selection step. This means that progress and errors which happen during this process are displayed in English by default.

To use another language for this part of the installer, press F2 in the DVD boot menu and select the language from the list. Alternatively, use the language boot parameter (for example, language=de_DE).

3.4.1 Self-Update Process

The process can be broken down into two different parts:

  1. Determine the update repository location.

  2. Download and apply the updates to the installation system.

3.4.1.1 Determining the Update Repository Location

Installer Self-Updates are distributed as regular RPM packages via a dedicated repository, so the first step is to find out the repository URL.

Important
Important: Installer Self-Update Repository Only

No matter which of the following options you use, only the installer self-update repository URL is expected, for example:

self_update=https://www.example.com/my_installer_updates/

Do not supply any other repository URL—for example the URL of the software update repository.

YaST will try the following sources of information:

  1. The self_update boot parameter. (For more details, see Section 3.2.2.6, “Enabling the Installer Self-Update”.) If you specify a URL, it will take precedence over any other method.

  2. The /general/self_update_url profile element in case you are using AutoYaST (see the profile sketch after this list).

  3. A registration server. YaST will query the registration server for the URL. The server to be used is determined in the following order:

    1. By evaluating the regurl boot parameter (Section 3.2.2.1, “Providing Data to Access an SMT Server”).

    2. By evaluating the /suse_register/reg_server profile element if you are using AutoYaST.

    3. By performing an SLP lookup. If an SLP server is found, YaST will ask you whether it should be used because there is no authentication involved and everybody on the local network could announce a registration server.

    4. By querying the SUSE Customer Center.

  4. If none of the previous attempts worked, the fallback URL (defined in the installation media) will be used.
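
For AutoYaST installations, a minimal profile sketch combining the two elements mentioned in the list above could look like the following. The URLs are placeholders and all other required profile content is omitted; the namespace follows Example 3.2.

<?xml version="1.0"?>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
 xmlns:config="http://www.suse.com/1.0/configns">
  <general>
    <!-- URL of the installer self-update repository (placeholder) -->
    <self_update_url>https://updates.example.com/</self_update_url>
  </general>
  <suse_register>
    <!-- registration server used to determine the self-update URL (placeholder) -->
    <reg_server>https://smt.example.com</reg_server>
  </suse_register>
</profile>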

3.4.1.2 Downloading and Applying the Updates

Once the update repository is determined, YaST will check whether an update is available. If so, all the updates will be downloaded and applied to the installation system.

Finally, YaST will be restarted to load the new version and the welcome screen will be shown. If no updates were available, the installation will continue without restarting YaST.

Note
Note: Update Integrity

Update signatures will be checked to ensure integrity and authorship. If a signature is missing or invalid, you will be asked whether you want to apply the update.

3.4.2 Networking during Self-Update

To download installer updates, YaST needs network access. By default, it tries to use DHCP on all network interfaces. If there is a DHCP server in the network, it will work automatically.

If you need a static IP setup, you can use the ifcfg boot argument. For more details, see the linuxrc documentation at https://en.opensuse.org/Linuxrc.
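
As a sketch, a static setup for the first Ethernet interface might look like the following; the interface name and addresses are assumptions, and the exact syntax should be verified against the linuxrc documentation:

ifcfg=eth0=192.168.1.100/24,192.168.1.1,192.168.1.1

The values are, in order, the IP address with prefix length, the gateway, and the name server.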

3.4.3 Custom Self-Update Repositories

YaST can use a user-defined repository instead of the official one by specifying a URL through the self_update boot option. However, the following points should be considered:

  • Only HTTP/HTTPS and FTP repositories are supported.

  • Only RPM-MD repositories are supported (required by SMT).

  • Packages are not installed in the usual way: They are uncompressed only and no scripts are executed.

  • No dependency checks are performed. Packages are installed in alphabetical order.

  • Files from the packages override the files from the original installation media. This means that the update packages need not contain all files, only the files that have changed. Unchanged files are omitted to save memory and download bandwidth.

Note
Note: Only One Repository

Currently, it is not possible to use more than one repository as source for installer self-updates.

3.5 Language, Keyboard and License Agreement

Start the installation of SUSE Linux Enterprise Desktop by choosing your language. Changing the language will automatically preselect a corresponding keyboard layout. Override this proposal by selecting a different keyboard layout from the drop-down box. The language selected here is also used to propose a time zone for the system clock. This setting can be modified later in the installed system as described in Chapter 14, Changing Language and Country Settings with YaST.

Read the license agreement that is displayed beneath the language and keyboard selection thoroughly. Use License Translations to access translations. If you agree to the terms, check I Agree to the License Terms and click Next to proceed with the installation. If you do not agree to the license agreement, you cannot install SUSE Linux Enterprise Desktop; click Abort to terminate the installation.

Language, Keyboard and License Agreement
Figure 3.3: Language, Keyboard and License Agreement

3.6 Network Settings

After booting into the installation, the installation routine is set up. During this setup, an attempt to configure at least one network interface with DHCP is made. In case this attempt fails, the Network Settings dialog launches. Choose a network interface from the list and click Edit to change its settings. Use the tabs to configure DNS and routing. See Section 17.4, “Configuring a Network Connection with YaST” for more details.

In case DHCP was successfully configured during installation setup, you can also access this dialog by clicking Network Configuration at the SUSE Customer Center Registration step. It lets you change the automatically provided settings.

Note
Note: Network Interface Configured via linuxrc

If at least one network interface is configured via linuxrc, automatic DHCP configuration is disabled and configuration from linuxrc is imported and used.

Network Settings
Figure 3.4: Network Settings
Tip
Tip: Accessing Network Storage or Local RAID

To access a SAN or a local RAID during the installation, you can use the libstoragemgmt command line client:

  1. Switch to a console with Ctrl+Alt+F2.

  2. Install the libstoragemgmt extension by running extend libstoragemgmt.

  3. Now you have access to the lsmcli command. For more information, run lsmcli --help.

  4. To return to the installer, press Alt+F7.

Supported are NetApp ONTAP, all SMI-S compatible SAN providers, and LSI MegaRAID.
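
As an illustration only, listing the storage systems exposed by an SMI-S provider might look like the following; the URI scheme, host name, and user are assumptions, so check lsmcli --help for the options supported by your provider:

root # lsmcli list --type SYSTEMS -u 'smispy://admin@san.example.com'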

3.7 SUSE Customer Center Registration

To get technical support and product updates, you need to register and activate your product with the SUSE Customer Center. Registering SUSE Linux Enterprise Desktop now grants you immediate access to the update repository. This enables you to install the system with the latest updates and patches available. If you are offline or want to skip this step, select Skip Registration. You can register your system at any time later from the installed system.

Note
Note: Network Configuration

After booting into the installation, the installation routine is set up. During this setup, an attempt to configure all network interfaces with DHCP is made. If DHCP is not available or you want to modify the network configuration, click Network Configuration in the upper right corner of the SUSE Customer Center Registration screen. The YaST module Network Settings opens. See Section 17.4, “Configuring a Network Connection with YaST” for details.

SUSE Customer Center Registration
Figure 3.5: SUSE Customer Center Registration

To register your system, provide the E-mail address associated with the SUSE account you or your organization uses to manage subscriptions. In case you do not have a SUSE account yet, go to the SUSE Customer Center home page (https://scc.suse.com/) to create one.

Enter the Registration Code you received with your copy of SUSE Linux Enterprise Desktop. YaST can also read registration codes from a USB storage device such as a flash disk. For details, see Section 3.7.1, “Loading Registration Codes from USB Storage”.

Proceed with Next to start the registration process. If one or more local registration servers are available on your network, you can choose one of them from a list. By default, SUSE Linux Enterprise Desktop is registered at the SUSE Customer Center. If your local registration server was not discovered automatically, choose Cancel, select Register System via local SMT Server and enter the URL of the server. Restart the registration by choosing Next again.

During the registration, the online update repositories will be added to your installation setup. When finished, you can choose whether to install the latest available package versions from the update repositories. This ensures that SUSE Linux Enterprise Desktop is installed with the latest security updates available. If you choose No, all packages will be installed from the installation media. Proceed with Next.

If the system was successfully registered during installation, YaST will disable repositories from local installation media such as CD/DVD or flash disks when the installation has been completed. This prevents problems if the installation source is no longer available and ensures that you always get the latest updates from the online repositories.

Tip
Tip: Release Notes

From this point on, the Release Notes can be viewed from any screen during the installation process by selecting Release Notes.

3.7.1 Loading Registration Codes from USB Storage

To make the registration more convenient, you can also store your registration codes on a USB storage device such as a flash disk. YaST will automatically pre-fill the corresponding text box. This is particularly useful when testing the installation or if you need to register many systems or extensions.

Note
Note: Limitations

Currently flash disks are only scanned during installation or upgrade, but not when registering a running system.

Create a file named regcodes.txt or regcodes.xml on the USB disk. If both are present, the XML takes precedence.

In that file, identify the product with the name returned by zypper search --type product and assign it a registration code as follows:

Example 3.1: regcodes.txt
SLES    cc36aae1
SLED    309105d4

sle-we  5eedd26a
sle-live-patching 8c541494
Example 3.2: regcodes.xml
<?xml version="1.0"?>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
 xmlns:config="http://www.suse.com/1.0/configns">
  <suse_register>
    <addons config:type="list">
      <addon>
<name>SLES</name>
<reg_code>cc36aae1</reg_code>
      </addon>
      <addon>
<name>SLED</name>
<reg_code>309105d4</reg_code>
      </addon>
      <addon>
<name>sle-we</name>
<reg_code>5eedd26a</reg_code>
      </addon>
      <addon>
<name>sle-live-patching</name>
<reg_code>8c541494</reg_code>
      </addon>
    </addons>
  </suse_register>
</profile>

Note that SLES and SLED are not extensions, but listing them as add-ons allows for combining several base product registration codes in a single file.

3.8 Extension Selection

If you have successfully registered your system in the previous step, a list of available modules and extensions based on SUSE Linux Enterprise Desktop is shown. Otherwise this configuration step is skipped. It is also possible to add modules and extensions from the installed system; see Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products for details.

The list contains free modules for SUSE Linux Enterprise Desktop, such as the SUSE Linux Enterprise SDK, and extensions that require a registration key liable for costs. Click an entry to see its description. Select a module or extension for installation by activating its check mark. This adds its repository from the SUSE Customer Center server to your installation—no additional installation sources are required. Furthermore, the installation pattern for the module or extension is added to the default installation to ensure it gets installed automatically.

The number of available extensions and modules depends on the registration server. A local registration server may offer only update repositories and no additional extensions.

Tip
Tip: Modules

Modules are fully supported parts of SUSE Linux Enterprise Desktop with a different life cycle. They have a clearly defined scope and are delivered via an online channel only. Registering at the SUSE Customer Center is a prerequisite for subscribing to these channels.

Tip
Tip: SUSE Linux Enterprise Desktop

As of SUSE Linux Enterprise 12, SUSE Linux Enterprise Desktop is not only available as a separate product, but also as a workstation extension for SUSE Linux Enterprise Server. If you register at the SUSE Customer Center, the SUSE Linux Enterprise Workstation Extension can be selected for installation. Note that installing it requires a valid registration key.

Extension Selection
Figure 3.6: Extension Selection

Proceed with Next to the Add-on Product dialog, where you can specify sources for additional add-on products not available on the registration server.

If you do not want to install add-ons, proceed with Next. Otherwise activate I would like to install an additional Add-on Product. Specify the Media Type by choosing from CD, DVD, Hard Disk, USB Mass Storage, a Local Directory or a Local ISO Image. If network access has been configured, you can choose from additional remote sources such as HTTP, SLP, FTP, etc. Alternatively, you may directly specify a URL. Check Download Repository Description Files to download the files describing the repository now. If deactivated, they will be downloaded after the installation starts. Proceed with Next and insert a CD or DVD if required.

Depending on the add-on's content, it may be necessary to accept additional license agreements. If you have chosen an add-on product requiring a registration key, you will be asked to enter it at the Extension and Module Registration Codes page. Proceed with Next.

Add-on Product
Figure 3.7: Add-on Product
Tip
Tip: No Registration Key Error

If you have chosen a product in the Extension Selection dialog for which you do not have a valid registration key, choose Back until you see the Extension Selection dialog. Deselect the module or extension and proceed with Next. Modules or extensions can also be installed at any time later from the running system as described in Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products.

3.9 Suggested Partitioning

Define a partition setup for SUSE Linux Enterprise Desktop in this step. The installer creates a proposal for one of the available disks containing a root partition formatted with Btrfs, a swap partition, and a home partition formatted with XFS. On hard disks smaller than 20 GB the proposal does not include a separate home partition. If one or more swap partitions have been detected on the available hard disks, these partitions will be used. You have several options to proceed:

Next

To accept the proposal without any changes, click Next to proceed with the installation workflow.

Edit Proposal Settings

To adjust the proposal choose Edit Proposal Settings. The pop-up dialog lets you switch to an LVM-based Proposal or an Encrypted LVM-based Proposal. You may also adjust file systems for the proposed partitions, create a separate home partition, and enlarge the swap partition (to enable suspend to disk, for example).

If the root file system format is Btrfs, you can also enable Btrfs snapshots here.

Create Partition Setup

Use this option to move the proposal described above to a different disk. Select a specific disk from the list. If the chosen hard disk does not contain any partitions yet, the whole hard disk will be used for the proposal. Otherwise, you can choose which existing partition(s) to use. Edit Proposal Settings lets you fine-tune the proposal.

Expert Partitioner

To create a custom partition setup choose Expert Partitioner. The Expert Partitioner opens, displaying the current partition setup for all hard disks, including the proposal suggested by the installer. You can Add, Edit, Resize, or Delete partitions.

You can also set up Logical Volumes (LVM), configure software RAID and device mapping (DM), encrypt Partitions, mount NFS shares and manage tmpfs volumes with the Expert Partitioner. To fine-tune settings such as the subvolume and snapshot handling for each Btrfs partition, choose Btrfs. For more information about custom partitioning and configuring advanced features, refer to Section 9.1, “Using the YaST Partitioner”.

Warning
Warning: Custom Partitioning on UEFI Machines

A UEFI machine requires an EFI system partition that must be mounted to /boot/efi. This partition must be formatted with the FAT file system.

If an EFI system partition is already present on your system (for example from a previous Windows installation) use it by mounting it to /boot/efi without formatting it.

Warning
Warning: Custom Partitioning and Snapper

SUSE Linux Enterprise Desktop can be configured to support snapshots which provide the ability to do rollbacks of system changes. SUSE Linux Enterprise Desktop uses Snapper in conjunction with Btrfs for this feature. Btrfs needs to be set up with snapshots enabled for the root partition. Refer to Chapter 7, System Recovery and Snapshot Management with Snapper for details on Snapper.

Being able to create system snapshots that enable rollbacks requires most of the system directories to be mounted on a single partition. Refer to Section 7.1, “Default Setup” for more information. This also includes /usr and /var. Only directories that are excluded from snapshots (see Section 7.1.2, “Directories That Are Excluded from Snapshots” for a list) may reside on separate partitions. Among others, this list includes /usr/local, /var/log, and /tmp.

If you do not plan to use Snapper for system rollbacks, the partitioning restrictions mentioned above do not apply.

Important
Important: Btrfs on an Encrypted Root Partition

The default partitioning setup suggests the root partition as Btrfs with /boot being a directory. To encrypt the root partition, make sure to use the GPT partition table type instead of the default MSDOS type. Otherwise the GRUB2 boot loader may not have enough space for the second stage loader.

Note
Note: Supported Software RAID Volumes

Installing to and booting from existing software RAID volumes is supported for Disk Data Format (DDF) volumes and Intel Matrix Storage Manager (IMSM) volumes. IMSM is also known by the following names:

  • Intel Rapid Storage Technology

  • Intel Matrix Storage Technology

  • Intel Application Accelerator / Intel Application Accelerator RAID Edition

Note
Note: Mount Points for FCoE and iSCSI Devices

FCoE and iSCSI devices will appear asynchronously during the boot process. While the initrd guarantees that those devices are set up correctly for the root file system, there are no such guarantees for any other file systems or mount points like /usr. Hence any system mount points like /usr or /var are not supported. To use those devices, ensure correct synchronization of the respective services and devices.

Important
Important: Handling of Windows Partitions in Proposals

In case the disk selected for the suggested partitioning proposal contains a large Windows FAT or NTFS partition, it will automatically be resized to make room for the SUSE Linux Enterprise Desktop installation. To avoid data loss, it is strongly recommended to

  • make sure the partition is not fragmented (run a defragmentation program from Windows prior to the SUSE Linux Enterprise Desktop installation)

  • double-check the suggested size for the Windows partition is big enough

  • back up your data prior to the SUSE Linux Enterprise Desktop installation

To adjust the proposed size of the Windows partition, use the Expert Partitioner.

Partitioning
Figure 3.8: Partitioning

3.10 Clock and Time Zone

In this dialog, select your region and time zone. Both are preselected according to the installation language. To change the preselected values, either use the map or the drop-down boxes for Region and Time Zone. When using the map, point the cursor at the rough direction of your region and left-click to zoom. Now choose your country or region by left-clicking. Right-click to return to the world map.

To set up the clock, choose whether the Hardware Clock is Set to UTC. If you run another operating system on your machine, such as Microsoft Windows, it is likely your system uses local time instead. If you run Linux on your machine, set the hardware clock to UTC and have the switch from standard time to daylight saving time performed automatically.

Important
Important: Set the Hardware Clock to UTC

The switch from standard time to daylight saving time (and vice versa) can only be performed automatically when the hardware clock (CMOS clock) is set to UTC. This also applies if you use automatic time synchronization with NTP, because automatic synchronization will only be performed if the time difference between the hardware and system clock is less than 15 minutes.

Since a wrong system time can cause serious problems (missed backups, dropped mail messages, mount failures on remote file systems, etc.), it is strongly recommended to always set the hardware clock to UTC.

Clock and Time Zone
Figure 3.9: Clock and Time Zone

If a network is already configured, you can configure time synchronization with an NTP server. Click Other Settings to either alter the NTP settings or to Manually set the time. See Chapter 25, Time Synchronization with NTP for more information on configuring the NTP service. When finished, click Accept to continue the installation.

If you are running without NTP configured, consider setting the sysconfig variable SYSTOHC=no to avoid saving unsynchronized time to the hardware clock.
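
A minimal sketch, assuming the variable resides in /etc/sysconfig/clock as on earlier SUSE releases:

# do not write the system time back to the hardware clock on shutdown
SYSTOHC="no"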

3.11 Create New User

Create a local user in this step. After entering the first name and last name, either accept the proposal or specify a new User name that will be used to log in. Only use lowercase letters (a-z), digits (0-9) and the characters . (dot), - (hyphen) and _ (underscore). Special characters, umlauts and accented characters are not allowed.

Finally, enter a password for the user. Re-enter it for confirmation (to ensure that you did not type something else by mistake). To provide effective security, a password should be at least six characters long and consist of uppercase and lowercase letters, numbers, and special characters (7-bit ASCII). Umlauts or accented characters are not allowed. Passwords you enter are checked for weakness. When entering a password that is easy to guess (such as a dictionary word or a name), you will see a warning. It is a good security practice to use strong passwords.

Important
Important: User Name and Password

Remember both your user name and the password because they are needed each time you log in to the system.

If you install SUSE Linux Enterprise Desktop on a machine with one or more existing Linux installations, YaST allows you to import user data such as user names and passwords. Select Import User Data from a Previous Installation and then Choose Users for import.

If you do not want to configure any local users (for example when setting up a client on a network with centralized user authentication), skip this step by choosing Next and confirming the warning. Network user authentication can be configured at any time later in the installed system; refer to Chapter 13, Managing Users with YaST for instructions.

Create New User
Figure 3.10: Create New User

Two additional options are available:

Use this Password for System Administrator

If checked, the same password you have entered for the user will be used for the system administrator root. This option is suitable for stand-alone workstations or machines in a home network that are administered by a single user. When not checked, you are prompted for a system administrator password in the next step of the installation workflow (see Section 3.12, “Password for the System Administrator root”).

Automatic Login

This option automatically logs the current user in to the system when it starts. This is mainly useful if the computer is operated by only one user.

Warning
Warning: Automatic Login

With the automatic login enabled, the system boots straight into your desktop with no authentication. If you store sensitive data on your system, you should not enable this option if the computer can also be accessed by others.

3.11.1 Expert Settings

Click Change in the Create User dialog to import users from a previous installation (if present). You can also change the password encryption type in this dialog.

The default authentication method is Local (/etc/passwd). If a former version of SUSE Linux Enterprise Desktop or another system using /etc/passwd is detected, you may import local users. To do so, check Read User Data from a Previous Installation and click Choose. In the next dialog, select the users to import and finish with OK.

By default the passwords are encrypted with the SHA-512 hash function. Changing this method is not recommended unless needed for compatibility reasons.

3.12 Password for the System Administrator root

If you have not chosen Use this Password for System Administrator in the previous step, you will be prompted to enter a password for the System Administrator root. Otherwise this configuration step is skipped.

root is the name of the superuser, or the administrator of the system. Unlike regular users, root has unlimited rights to change the system configuration, install programs, and set up new hardware. If users forget their passwords or have other problems with the system, root can help. The root account should only be used for system administration, maintenance, and repair. Logging in as root for daily work is rather risky: a single mistake could lead to irretrievable loss of system files.

For verification purposes, the password for root must be entered twice. Do not forget the root password. After having been entered, this password cannot be retrieved.

Password for the System Administrator root
Figure 3.11: Password for the System Administrator root
Tip
Tip: Passwords and Keyboard Layout

It is recommended to only use characters that are available on an English keyboard. In case of a system error, or when you need to start your system in rescue mode, a localized keyboard might not be available.

The root password can be changed any time later in the installed system. To do so run YaST and start Security and Users › User and Group Management.

Important
Important: The root User

The user root has all the permissions needed to make changes to the system. To carry out such tasks, the root password is required. You cannot carry out any administrative tasks without this password.

3.13 Installation Settings

On the last step before the real installation takes place, you can alter installation settings suggested by the installer. To modify the suggestions, click the respective headline. After having made changes to a particular setting, you are always returned to the Installation Settings window, which is updated accordingly.

Installation Settings
Figure 3.12: Installation Settings

3.13.1 Software

SUSE Linux Enterprise Desktop contains several software patterns for various application purposes. Click Software to open the Software Selection and System Tasks screen where you can modify the pattern selection according to your needs. Select a pattern from the list and see a description in the right-hand part of the window. Each pattern contains several software packages needed for specific functions (for example Multimedia or Office software). For a more detailed selection based on software packages to install, select Details to switch to the YaST Software Manager.

You can also install additional software packages or remove software packages from your system at any later time with the YaST Software Manager. For more information, refer to Chapter 10, Installing or Removing Software.

Software Selection and System Tasks
Figure 3.13: Software Selection and System Tasks
Tip
Tip: Adding Secondary Languages

The language you selected with the first step of the installation will be used as the primary (default) language for the system. You can add secondary languages from within the Software dialog by choosing Details › View › Languages.

3.13.2 Booting

The installer proposes a boot configuration for your system. Other operating systems found on your computer, such as Microsoft Windows or other Linux installations, will automatically be detected and added to the boot loader. However, SUSE Linux Enterprise Desktop will be booted by default. Normally, you can leave these settings unchanged. If you need a custom setup, modify the proposal according to your needs. For information, see Section 13.3, “Configuring the Boot Loader with YaST”.

Important
Important: Software RAID 1

Booting a configuration where /boot resides on a software RAID 1 device is supported, but it requires installing the boot loader in the MBR (Boot Loader Location › Boot from Master Boot Record). Having /boot on software RAID devices with a level other than RAID 1 is not supported.

3.13.3 Firewall and SSH

By default SuSEFirewall2 is enabled on all configured network interfaces. To globally disable the firewall for this computer, click Disable (not recommended).

Note
Note: Firewall Settings

If the firewall is activated, all interfaces are configured to be in the External Zone, where all ports are closed by default, ensuring maximum security. The only port you can open during the installation is port 22 (SSH), to allow remote access. All other services requiring network access (such as FTP, Samba, Web server, etc.) will only work after having adjusted the firewall settings. Refer to Chapter 15, Masquerading and Firewalls for more information.

To enable remote access via the secure shell (SSH), make sure the SSH service is enabled and the SSH port is open.

Tip
Tip: Existing SSH Host Keys

If you install SUSE Linux Enterprise Desktop on a machine with one or more existing Linux installations, the installation routine imports the SSH host key with the most recent access time from an existing installation by default. See also Section 3.13.5, “Import SSH Host Keys and Configuration”.

If you are performing a remote administration over VNC, you can also specify whether the machine should be accessible via VNC after the installation. Note that enabling VNC also requires you to set the Default systemd Target to graphical.

3.13.4 Default systemd Target

SUSE Linux Enterprise Desktop can boot into two different targets (formerly known as runlevels). The graphical target starts a display manager, whereas the multi-user target starts the command line interface.

The default target is graphical. In case you have not installed the X Window System patterns, you need to change it to multi-user. If the system should be accessible via VNC, you need to choose graphical.

3.13.5 Import SSH Host Keys and Configuration

If an existing Linux installation on your computer was detected, YaST will import the most recent SSH host key found in /etc/ssh by default, optionally including other files in the directory as well. This makes it possible to reuse the SSH identity of the existing installation, avoiding the REMOTE HOST IDENTIFICATION HAS CHANGED warning on the first connection. Note that this item is not shown in the installation summary if YaST has not discovered any other installations.

Import SSH Host Keys and Configuration
Figure 3.14: Import SSH Host Keys and Configuration
I would like to import SSH keys from a previous install

Select this option if you want to import the SSH host key and optionally the configuration of an installed system. You can select the installation to import from in the option list below.

Import SSH Configuration

Enable this to copy other files in /etc/ssh to the installed system in addition to the host keys.

3.13.6 System Information

This screen lists all the hardware information the installer could obtain about your computer. When opened for the first time, the hardware detection is started. Depending on your system, this may take some time. Select any item in the list and click Details to see detailed information about the selected item. Use Save to File to save a detailed list to either the local file system or a removable device.

Advanced users can also change the PCI ID Setup and kernel settings by choosing Kernel Settings. A screen with two tabs opens:

PCI ID Setup

Each kernel driver contains a list of device IDs of all devices it supports. If a new device is not in any driver's database, the device is treated as unsupported, even if it can be used with an existing driver. You can add PCI IDs to a device driver here. Only advanced users should attempt to do so.

To add an ID, click Add and select whether to Manually enter the data, or whether to choose from a list. Enter the required data. The SysFS Dir is the directory name from /sys/bus/pci/drivers—if empty, the driver name is used as the directory name. Existing entries can be managed with Edit and Delete.

Kernel Settings

Change the Global I/O Scheduler here. If Not Configured is chosen, the default setting for the respective architecture will be used. This setting can also be changed at any time later from the installed system. Refer to Chapter 12, Tuning I/O Performance for details on I/O tuning.

You can also activate Enable SysRq Keys here. These keys let you issue basic commands (such as rebooting the system or writing kernel dumps) in case the system crashes. Enabling these keys is recommended when doing kernel development. Refer to https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html for details.

3.14 Performing the Installation

After configuring all installation settings, click Install in the Installation Settings window to start the installation. Some software may require a license confirmation. If your software selection includes such software, license confirmation dialogs are displayed. Click Accept to install the software package. When not agreeing to the license, click I Disagree and the software package will not be installed. In the dialog that follows, confirm with Install again.

The installation usually takes between 15 and 30 minutes, depending on the system performance and the selected software scope. After having prepared the hard disk and having saved and restored the user settings, the software installation starts. During this procedure a slide show introduces the features of SUSE Linux Enterprise Desktop. Choose Details to switch to the installation log or Release Notes to read important up-to-date information that was not available when the manuals were printed.

After the software installation has completed, the system reboots into the new installation where you can log in. To customize the system configuration or to install additional software packages, start YaST.

Note
Note: One-Stage Installation

Starting with SUSE Linux Enterprise Desktop 12 the system installation and basic configuration including the network setup is done in a single stage. After having rebooted into the installed system, you can log in and start using the system. To fine-tune the setup, to configure services or to install additional software, start YaST.

4 Cloning Disk Images

If SUSE Linux Enterprise Desktop is installed in a virtualized environment, cloning an existing installation may be the fastest way to deploy further machines. SUSE Linux Enterprise Desktop provides a script to clean up configuration that is unique to each installation. However, with the introduction of systemd, unique system identifiers are used and set in different locations and files. Therefore, cloning is no longer the recommended way to build system images. Images can be created with KIWI; see https://doc.opensuse.org/projects/kiwi/doc/ for details.

To clone disks of machines, refer to the documentation of your virtualization environment.

4.1 Cleaning Up Unique System Identifiers

Warning
Warning: Important Configuration Loss

Executing the following procedure permanently deletes important system configuration data. If the source system for the clone is used in production, run the clean-up script on the cloned image instead.

To clean all unique system identifiers, execute the following procedure before or after cloning a disk image. If run on the clone, the procedure needs to be repeated on each clone. Therefore, we recommend creating a golden image that is not used in production and only serves as a source for new clones. The golden image is already cleaned up, and clones can be used immediately.

The clone-master-clean-up command removes, among other items:

  • Swap files

  • Zypper repositories

  • SSH host and client keys

  • Temporary directories, like /tmp/*

  • Postfix data

  • HANA firewall script

  • systemd journal

  1. Use zypper to install clone-master-clean-up:

    root # zypper install clone-master-clean-up
  2. Configure the behavior of clone-master-clean-up by editing the /etc/sysconfig/clone-master-clean-up configuration file. It defines whether users with a UID larger than 1000, the /etc/sudoers file, Zypper repositories, and Btrfs snapshots should be removed.

  3. Remove existing configuration and unique identifiers by running the script:

    root # clone-master-clean-up

Part III Setting Up an Installation Server

5 Setting Up the Server Holding the Installation Sources

SUSE® Linux Enterprise Desktop can be installed in different ways. Apart from the usual media installation covered in Chapter 3, Installation with YaST, you can choose from various network-based approaches or even opt for an unattended installation of SUSE Linux Enterprise Desktop.

6 Preparing the Boot of the Target System

SUSE® Linux Enterprise Desktop can be installed in different ways. Apart from the usual media installation covered in Chapter 3, Installation with YaST, you can choose from various network-based approaches or even take a completely hands-off approach to the installation of SUSE Linux Enterprise Desktop.

5 Setting Up the Server Holding the Installation Sources

SUSE® Linux Enterprise Desktop can be installed in different ways. Apart from the usual media installation covered in Chapter 3, Installation with YaST, you can choose from various network-based approaches or even opt for an unattended installation of SUSE Linux Enterprise Desktop.

Each method is introduced by means of two short checklists: one listing the prerequisites for this method and the other illustrating the basic procedure. More detail is then provided for all the techniques used in these installation scenarios.

Note
Note: Terminology

In the following sections, the system to hold your new SUSE Linux Enterprise Desktop installation is called target system or installation target. The term repository (previously called installation source) is used for all sources of installation data. This includes physical media, such as CD and DVD, and network servers distributing the installation data in your network.

Depending on the operating system of the machine used as the network installation source for SUSE Linux Enterprise Desktop, there are several options for the server configuration. The easiest way to set up an installation server is to use YaST on SUSE Linux Enterprise Server or openSUSE.

Tip
Tip: Installation Server Operating System

You can even use a Microsoft Windows machine as the installation server for your Linux deployment. See Section 5.5, “Managing an SMB Repository” for details.

5.1 Setting Up an Installation Server Using YaST

YaST offers a graphical tool for creating network repositories. It supports HTTP, FTP, and NFS network installation servers.

  1. Log in as root to the machine that should act as installation server.

  2. Start YaST › Miscellaneous › Installation Server.

  3. Select the repository type (HTTP, FTP, or NFS). The selected service is started automatically every time the system starts. If a service of the selected type is already running on your system and you want to configure it manually for the server, deactivate the automatic configuration of the server service with Do Not Configure Any Network Services. In both cases, define the directory in which the installation data should be made available on the server.

  4. Configure the required repository type. This step relates to the automatic configuration of server services. It is skipped when automatic configuration is deactivated.

    Define an alias for the root directory of the FTP or HTTP server on which the installation data should be found. The repository will later be located under ftp://Server-IP/Alias/Name (FTP) or under http://Server-IP/Alias/Name (HTTP). Name stands for the name of the repository, which is defined in the following step. If you selected NFS in the previous step, define wild cards and export options. The NFS server will be accessible under nfs://Server-IP/Name. Details of NFS and exports can be found in Chapter 26, Sharing File Systems with NFS.

    Tip
    Tip: Firewall Settings

    Make sure that the firewall settings of your server system allow traffic on the ports for HTTP, NFS, and FTP. If they currently do not, enable Open Port in Firewall or check Firewall Details first.

  5. Configure the repository. Before the installation media are copied to their destination, define the name of the repository (ideally, an easily remembered abbreviation of the product and version). YaST allows providing ISO images of the media instead of copies of the installation DVDs. If you want this, activate the relevant check box and specify the directory path under which the ISO files can be found locally. Depending on the product to distribute using this installation server, it might be necessary to add additional media, such as service pack DVDs as extra repositories. To announce your installation server in the network via OpenSLP, activate the appropriate option.

    Tip
    Tip: Announcing the Repository

    Consider announcing your repository via OpenSLP if your network setup supports this option. This saves you from entering the network installation path on every target machine. The target systems are booted using the SLP boot option and find the network repository without any further configuration. For details on this option, refer to Section 7.2, “Booting the Target System for Installation”.

  6. Configure extra repositories. YaST follows a specific naming convention to configure add-on CD or service pack CD repositories. Configuration is accepted only if the repository name of the add-on CDs starts with the repository name of the installation media. In other words, if you chose SLES12SP1 as the repository name for DVD1, then you should choose SLES12SP1addon as the repository name for DVD2. The same applies to SDK CDs.

  7. Upload the installation data. The lengthiest step in configuring an installation server is copying the actual installation media. Insert the media in the sequence requested by YaST and wait for the copying procedure to end. When the sources have been fully copied, return to the overview of existing repositories and close the configuration by selecting Finish.

    Your installation server is now fully configured and ready for service. It is automatically started every time the system is started. No further intervention is required. You only need to configure and start this service correctly by hand if you have deactivated the automatic configuration of the selected network service with YaST as an initial step.

To deactivate a repository, select the repository to remove, then select Delete. The installation data are removed from the system. To deactivate the network service, use the respective YaST module.

If your installation server needs to provide the installation data for more than one product or product version, start the YaST installation server module and select Add in the overview of existing repositories to configure the new repository.

5.2 Setting Up an NFS Repository Manually

Important
Important

This assumes that you are using some kind of SUSE Linux-based operating system on the machine that will serve as installation server. If this is not the case, turn to the other vendor's documentation on NFS instead of following these instructions.

Setting up an NFS source for installation is done in two main steps. In the first step, create the directory structure holding the installation data and copy the installation media over to this structure. Second, export the directory holding the installation data to the network.

To create a directory to hold the installation data, proceed as follows:

  1. Log in as root.

  2. Create a directory that will later hold all installation data and change into this directory. For example:

    root # mkdir /srv/install/PRODUCT/PRODUCTVERSION
    root # cd /srv/install/PRODUCT/PRODUCTVERSION

    Replace PRODUCT with an abbreviation of the product name and PRODUCTVERSION with a string that contains the product name and version.

  3. For each DVD contained in the media kit execute the following commands:

    1. Copy the entire content of the installation DVD into the installation server directory:

      root # cp -a /media/PATH_TO_YOUR_DVD_DRIVE .

      Replace PATH_TO_YOUR_DVD_DRIVE with the actual path under which your DVD drive is addressed. Depending on the type of drive used in your system, this can be cdrom, cdrecorder, dvd, or dvdrecorder.

    2. Rename the directory to the DVD number:

      root # mv PATH_TO_YOUR_DVD_DRIVE DVDX

      Replace X with the actual number of your DVD.

On SUSE Linux Enterprise Desktop, you can export the repository with NFS using YaST. Proceed as follows:

  1. Log in as root.

  2. Start YaST › Network Services › NFS Server.

  3. Select Start and Open Port in Firewall and click Next.

  4. Select Add Directory and browse for the directory containing the installation sources, in this case, PRODUCTVERSION.

  5. Select Add Host and enter the host names of the machines to which to export the installation data. Instead of specifying host names here, you could also use wild cards, ranges of network addresses, or the domain name of your network. Enter the appropriate export options or leave the default, which works fine in most setups. For more information about the syntax used in exporting NFS shares, read the exports man page.

  6. Click Finish. The NFS server holding the SUSE Linux Enterprise Desktop repository is automatically started and integrated into the boot process.

If you prefer manually exporting the repository via NFS instead of using the YaST NFS Server module, proceed as follows:

  1. Log in as root.

  2. Open the file /etc/exports and enter the following line:

    /PRODUCTVERSION *(ro,root_squash,sync)

    This exports the directory /PRODUCTVERSION to any host that is part of this network or to any host that can connect to this server. To limit access to this server, use netmasks or domain names instead of the general wild card * (see the example after this procedure). Refer to the exports man page for details. Save and exit this configuration file.

  3. To add the NFS service to the list of servers started during system boot, execute the following command:

    root # systemctl enable nfsserver
  4. Start the NFS server with systemctl start nfsserver. If you need to change the configuration of your NFS server later, modify the configuration file and restart the NFS daemon with systemctl restart nfsserver.
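
For example, to restrict the export from the procedure above to a single subnet instead of using the general wild card *, the /etc/exports line could look like this (the subnet is an assumption; adjust it to your network):

# export the repository read-only to the local subnet only
/PRODUCTVERSION 192.168.1.0/24(ro,root_squash,sync)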

Announcing the NFS server via OpenSLP makes its address known to all clients in your network.

  1. Log in as root.

  2. Create the /etc/slp.reg.d/install.suse.nfs.reg configuration file with the following lines:

    # Register the NFS Installation Server
    service:install.suse:nfs://$HOSTNAME/PATH_TO_REPOSITORY/DVD1,en,65535
    description=NFS Repository

    Replace PATH_TO_REPOSITORY with the actual path to the installation source on your server.

  3. Start the OpenSLP daemon with systemctl start slpd.

5.3 Setting Up an FTP Repository Manually

Creating an FTP repository is very similar to creating an NFS repository. An FTP repository can be announced over the network using OpenSLP as well.

  1. Create a directory holding the installation sources as described in Section 5.2, “Setting Up an NFS Repository Manually”.

  2. Configure the FTP server to distribute the contents of your installation directory:

    1. Log in as root and install the package vsftpd using the YaST software management.

    2. Enter the FTP server root directory:

      root # cd /srv/ftp
    3. Create a subdirectory holding the installation sources in the FTP root directory:

      root # mkdir REPOSITORY

      Replace REPOSITORY with the product name.

    4. Mount the contents of the installation repository into the change root environment of the FTP server:

      root # mount --bind PATH_TO_REPOSITORY /srv/ftp/REPOSITORY

      Replace PATH_TO_REPOSITORY and REPOSITORY with values matching your setup. If you need to make this permanent, add it to /etc/fstab (see the example at the end of this section).

    5. Start vsftpd (for example, with systemctl start vsftpd).

  3. Announce the repository via OpenSLP, if this is supported by your network setup:

    1. Create the /etc/slp.reg.d/install.suse.ftp.reg configuration file with the following lines:

      # Register the FTP Installation Server
      service:install.suse:ftp://$HOSTNAME/REPOSITORY/DVD1,en,65535
      description=FTP Repository

      Replace REPOSITORY with the actual name of the repository directory on your server. The service: line should be entered as one continuous line.

    2. Start the OpenSLP daemon with systemctl start slpd.
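
To make the bind mount from the FTP procedure above persistent across reboots, an /etc/fstab entry like the following can be used, with the same placeholder paths as above:

# bind-mount the repository into the FTP root at boot time
PATH_TO_REPOSITORY /srv/ftp/REPOSITORY none bind 0 0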

5.4 Setting Up an HTTP Repository Manually

Creating an HTTP repository is very similar to creating an NFS repository. An HTTP repository can be announced over the network using OpenSLP as well.

  1. Create a directory holding the installation sources as described in Section 5.2, “Setting Up an NFS Repository Manually”.

  2. Configure the HTTP server to distribute the contents of your installation directory:

    1. Install the Web server Apache.

    2. Enter the root directory of the HTTP server (/srv/www/htdocs) and create the subdirectory that will hold the installation sources:

      root # mkdir REPOSITORY

      Replace REPOSITORY with the product name.

    3. Create a symbolic link from the location of the installation sources to the root directory of the Web server (/srv/www/htdocs):

      root # ln -s /PATH_TO_REPOSITORY /srv/www/htdocs/REPOSITORY
    4. Modify the configuration file of the HTTP server (/etc/apache2/default-server.conf) to make it follow symbolic links. Replace the following line:

      Options None

      with

      Options Indexes FollowSymLinks
    5. Reload the HTTP server configuration using systemctl reload apache2.

  3. Announce the repository via OpenSLP, if this is supported by your network setup:

    1. Create the /etc/slp.reg.d/install.suse.http.reg configuration file with the following lines:

      # Register the HTTP Installation Server
      service:install.suse:http://$HOSTNAME/REPOSITORY/DVD1/,en,65535
      description=HTTP Repository

      Replace REPOSITORY with the actual path to the repository on your server. The service: line should be entered as one continuous line.

    2. Start the OpenSLP daemon using systemctl start slpd.

5.5 Managing an SMB Repository

Using SMB, you can import the installation sources from a Microsoft Windows server and start your Linux deployment even with no Linux machine around.

To set up an exported Windows Share holding your SUSE Linux Enterprise Desktop repository, proceed as follows:

  1. Log in to your Windows machine.

  2. Create a new directory that will hold the entire installation tree and name it INSTALL, for example.

  3. Export this share according to the procedure outlined in your Windows documentation.

  4. Enter this share and create a subdirectory, called PRODUCT. Replace PRODUCT with the actual product name.

  5. Enter the INSTALL/PRODUCT directory and copy each DVD to a separate directory, such as DVD1 and DVD2.

To use an SMB mounted share as a repository, proceed as follows:

  1. Boot the installation target.

  2. Select Installation.

  3. Press F4 for a selection of the repository.

  4. Choose SMB and enter the Windows machine's name or IP address, the share name (INSTALL/PRODUCT/DVD1, in this example), user name, and password. The syntax looks like this:

    smb://workdomain;user:password@server/INSTALL/DVD1

    After you press Enter, YaST starts and you can perform the installation.

5.6 Using ISO Images of the Installation Media on the Server

Instead of copying physical media into your server directory manually, you can also mount the ISO images of the installation media into your installation server and use them as a repository. To set up an HTTP, NFS or FTP server that uses ISO images instead of media copies, proceed as follows:

  1. Download the ISO images and save them to the machine to use as the installation server.

  2. Log in as root.

  3. Choose and create an appropriate location for the installation data, as described in Section 5.2, “Setting Up an NFS Repository Manually”, Section 5.3, “Setting Up an FTP Repository Manually”, or Section 5.4, “Setting Up an HTTP Repository Manually”.

  4. Create subdirectories for each DVD.

  5. To mount and unpack each ISO image to the final location, issue the following command:

    root # mount -o loop PATH_TO_ISO PATH_TO_REPOSITORY/PRODUCT/MEDIUMX

    Replace PATH_TO_ISO with the path to your local copy of the ISO image, PATH_TO_REPOSITORY with the source directory of your server, PRODUCT with the product name, and MEDIUMX with the type (CD or DVD) and number of media you are using.

  6. Repeat the previous step to mount all ISO images needed for your product.

  7. Start your installation server as usual, as described in Section 5.2, “Setting Up an NFS Repository Manually”, Section 5.3, “Setting Up an FTP Repository Manually”, or Section 5.4, “Setting Up an HTTP Repository Manually”.

To automatically mount the ISO images at boot time, add the respective mount entries to /etc/fstab. An entry following the previous example would look like this:

PATH_TO_ISO PATH_TO_REPOSITORY/PRODUCT/MEDIUMX auto loop
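
For example, a complete entry with purely illustrative paths could look like this; the last two fields disable dump and boot-time file system checks:

/srv/isos/product-dvd1.iso /srv/install/product/dvd1 iso9660 loop,ro 0 0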

6 Preparing the Boot of the Target System


SUSE® Linux Enterprise Desktop can be installed in different ways. Apart from the usual media installation covered in Chapter 3, Installation with YaST, you can choose from various network-based approaches or even take a completely hands-off approach to the installation of SUSE Linux Enterprise Desktop.

The examples use NFS for serving the installation data. If you want to use FTP, SMB or HTTP, see Chapter 5, Setting Up the Server Holding the Installation Sources.

Note
Note: Terminology

In the following sections, the system to hold your new SUSE Linux Enterprise Desktop installation is called target system or installation target. The term repository (previously called installation source) is used for all sources of installation data. This includes physical media, such as CD and DVD, and network servers distributing the installation data in your network.

This section covers the configuration tasks needed in complex boot scenarios. It contains ready-to-apply configuration examples for DHCP, PXE boot, TFTP, and Wake on LAN.

The examples assume that the DHCP, TFTP and NFS servers reside on the same machine with the IP address 192.168.1.1. The services can also reside on different machines. In any case, make sure to adapt the IP addresses to your network.

6.1 Setting Up a DHCP Server

In addition to providing automatic address allocation to your network clients, the DHCP server announces the IP address of the TFTP server and the file that needs to be pulled in by the installation routines on the target machine. The file that has to be loaded depends on the architecture of the target machine and whether legacy BIOS or UEFI boot is used.

  1. Log in as root to the machine hosting the DHCP server.

  2. Enable the DHCP server by executing systemctl enable dhcpd.

  3. Append the following lines to a subnet configuration of your DHCP server's configuration file located under /etc/dhcpd.conf:

    # The following lines are optional
    option domain-name "my.lab";
    option domain-name-servers 192.168.1.1;
    option routers 192.168.1.1;
    option ntp-servers 192.168.1.1;
    ddns-update-style none;
    default-lease-time 3600;
    
    # The following lines are required
    option arch code 93 = unsigned integer 16; # RFC4578
    subnet 192.168.1.0 netmask 255.255.255.0 {
     next-server 192.168.1.1;
     range 192.168.1.100 192.168.1.199;
     default-lease-time 3600;
     max-lease-time 3600;
     if option arch = 00:07 or option arch = 00:09 {
       filename "/EFI/x86/grub.efi";
     }
     else if option arch = 00:0b {
       filename "/EFI/aarch64/bootaa64.efi";
     }
     else  {
       filename "/BIOS/x86/pxelinux.0";
     }
    }

    This configuration example uses the subnet 192.168.1.0/24 with the DHCP, DNS and gateway on the server with the IP 192.168.1.1. Make sure that all used IP addresses are changed according to your network layout. For more information about the options available in dhcpd.conf, refer to the dhcpd.conf manual page.

  4. Restart the DHCP server by executing systemctl restart dhcpd.
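
To validate the configuration syntax after making changes, you can run dhcpd in test mode. This assumes the ISC dhcpd shipped with SUSE Linux Enterprise:

root # dhcpd -t -cf /etc/dhcpd.conf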

If you plan to use SSH for the remote control of a PXE and Wake on LAN installation, specify the IP address DHCP should provide to the installation target. To achieve this, modify the DHCP configuration shown above according to the following example:

group {
 host test {
   hardware ethernet MAC_ADDRESS;
   fixed-address IP_ADDRESS;
   }
}

The host statement introduces the host name of the installation target. To bind the host name and IP address to a specific host, you must know and specify the system's hardware (MAC) address. Replace all the variables used in this example with the actual values that match your environment.

After restarting the DHCP server, it provides a static IP to the host specified, enabling you to connect to the system via SSH.
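
For example, with a purely illustrative MAC address and a fixed IP address outside the dynamic range defined above:

group {
 host test {
   hardware ethernet 00:30:6E:08:EC:80;
   fixed-address 192.168.1.20;
   }
}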

6.2 Setting Up a TFTP Server

On SUSE-based systems, you can use YaST to set up a TFTP server. Alternatively, set it up manually. The TFTP server delivers the boot image to the target system after the system has booted and sent a request for it.

6.2.1 Setting Up a TFTP Server Using YaST

  1. Log in as root.

  2. Start YaST › Network Services › TFTP Server and install the requested package.

  3. Click Enable to make sure that the server is started and included in the boot routines. No further action is required on your part, because xinetd starts tftpd at boot time.

  4. Click Open Port in Firewall to open the appropriate port in the firewall running on your machine. If there is no firewall running on your server, this option is not available.

  5. Click Browse to browse for the boot image directory. The default directory /srv/tftpboot is created and selected automatically.

  6. Click Finish to apply your settings and start the server.

6.2.2 Setting Up a TFTP Server Manually

  1. Log in as root and install the packages tftp and xinetd.

  2. Modify the configuration of xinetd located under /etc/xinetd.d to make sure that the TFTP server is started on boot:

    1. If it does not exist, create a file called tftp under this directory with touch tftp. Then run chmod 755 tftp.

    2. Open the file tftp and add the following lines:

      service tftp
      {
              socket_type            = dgram
              protocol               = udp
              wait                   = yes
              user                   = root
              server                 = /usr/sbin/in.tftpd
              server_args            = -s /srv/tftpboot
              disable                = no
      }
    3. Save the file and restart xinetd with systemctl restart xinetd.
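
To test the server, you can fetch a file with a TFTP client from any machine in the network once boot files are in place (see Section 6.3). This is a sketch assuming the tftp client package is installed:

tux > tftp 192.168.1.1 -c get BIOS/x86/pxelinux.0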

6.3 Installing Files on TFTP Server

The following procedures describe how to prepare the server for target machines with UEFI and BIOS on 32-bit and 64-bit x86 architectures. The prepared structure also provides for AArch64 systems.

6.3.1 Preparing the Structure

In this procedure, replace OS_VERSION and SP_VERSION with the operating system and service pack versions used, for example sles12 and sp3.

  1. Create a structure in /srv/tftpboot to support the various options.

    root # mkdir -p /srv/tftpboot/BIOS/x86
    root # mkdir -p /srv/tftpboot/EFI/x86/boot
    root # mkdir -p /srv/tftpboot/EFI/aarch64/boot
    root # mkdir -p /srv/install/x86/OS_VERSION/SP_VERSION/cd1
    root # mkdir -p /srv/install/aarch64/OS_VERSION/SP_VERSION/cd1
  2. Download the DVD ISO images of SUSE Linux Enterprise Desktop 12 SP3 from the SUSE Web site for all architectures you need.

  3. Mount the ISO files as described in Section 5.6, “Using ISO Images of the Installation Media on the Server”. To have the files available after a reboot, create an entry in /etc/fstab. For a standard installation, only DVD 1 is required.

    root # mount -o loop PATH_TO_ISO /srv/install/ARCH/OS_VERSION/SP_VERSION/cd1/

    Repeat this step for all required architectures and replace ARCH with x86 or aarch64 and PATH_TO_ISO with the path to the corresponding ISO file.

  4. Copy the kernel, initrd and message files required for x86 BIOS and UEFI boot to the appropriate location.

    root # cd /srv/install/x86/OS_VERSION/SP_VERSION/cd1/boot/x86_64/loader/
    root # cp -a linux initrd message /srv/tftpboot/BIOS/x86/
  5. Ensure that the path /srv/install is available via NFS. For details, see Section 5.2, “Setting Up an NFS Repository Manually”.
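
A minimal sketch of such an export, assuming the NFS server packages are installed and using the example network, is the following entry in /etc/exports, activated with exportfs -r:

/srv/install 192.168.1.0/24(ro,root_squash,sync,no_subtree_check)

root # exportfs -r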

6.3.2 BIOS Files for x86

  1. Copy pxelinux.0 into the TFTP folder and prepare a subfolder for the configuration file.

    root # cp /usr/share/syslinux/pxelinux.0 /srv/tftpboot/BIOS/x86/
    root # mkdir /srv/tftpboot/BIOS/x86/pxelinux.cfg
  2. Create /srv/tftpboot/BIOS/x86/pxelinux.cfg/default and add the following lines:

    default install
    
    # hard disk
    label harddisk
     localboot -2
    # install
    label install
     kernel linux
     append initrd=initrd install=nfs://192.168.1.1:/srv/install/x86/OS_VERSION/SP_VERSION/cd1
    
    display message
    implicit 0
    prompt 1
    timeout 5
  3. Edit the file /srv/tftpboot/BIOS/x86/message to reflect the default file you just edited.

    Welcome to the Installer Environment!
    
    To start the installation enter 'install' and press <return>.
    
    Available boot options:
     harddisk   - Boot from Hard Disk (this is default)
     install    - Installation

6.3.3 UEFI Files for x86

In this procedure, replace OS_version and SP_version with the operating system and service pack versions used, for example sles12 and sp3.

  1. Copy all required grub2 files for UEFI booting.

    root # cd /srv/install/x86/OS_version/SP_version/cd1/EFI/BOOT
    root # cp -a bootx64.efi grub.efi MokManager.efi /srv/tftpboot/EFI/x86/
  2. Copy the kernel and initrd files to the directory structure.

    root # cd /srv/install/x86/OS_version/SP_version/cd1/boot/x86_64/loader/
    root # cp -a linux initrd /srv/tftpboot/EFI/x86/boot
  3. Create the file /srv/tftpboot/EFI/x86/grub.cfg with at least the following content:

    set timeout=5
    menuentry 'Install OS_version SP_version for x86_64' {
      linuxefi /EFI/x86/boot/linux \
       install=nfs://192.168.1.1/srv/install/x86/OS_version/SP_version/cd1
      initrdefi /EFI/x86/boot/initrd
    }

6.3.4 UEFI Files for AArch64

In this procedure, replace OS_version and SP_version with the operating system and service pack versions used, for example sles12 and sp3.

  1. The setup is very similar to the x86_64 EFI environment. Start by copying the files required for UEFI booting of a grub2-efi environment.

    root # cd /srv/install/aarch64/OS_version/SP_version/cd1/EFI/BOOT
    root # cp -a bootaa64.efi /srv/tftpboot/EFI/aarch64/
  2. Copy the kernel and initrd to the directory structure.

    root # cd /srv/install/aarch64/OS_version/SP_version/cd1/boot/aarch64
    root # cp -a linux initrd /srv/tftpboot/EFI/aarch64/boot
  3. Now create the file /srv/tftpboot/EFI/aarch64/grub.cfg and add the following content:

    menuentry 'Install OS_version SP_version' {
      linux /EFI/aarch64/boot/linux network=1 usessh=1 sshpassword="suse" \
       install=nfs://192.168.1.1:/srv/install/aarch64/OS_version/SP_version/cd1 \
       console=ttyAMA0,115200n8
      initrd /EFI/aarch64/boot/initrd
    }

    This configuration includes a few additional options that enable the serial console and allow installation via SSH, which is helpful for systems that do not have a standard KVM console interface. Note that the console=ttyAMA0 setting is specific to a particular ARM platform.

6.4 PXELINUX Configuration Options

The options listed here are a subset of all the options available for the PXELINUX configuration file.

APPEND OPTIONS

Add one or more options to the kernel command line. These are added for both automatic and manual boots. The options are added at the very beginning of the kernel command line, usually permitting explicitly entered kernel options to override them.

APPEND -

Append nothing. APPEND with a single hyphen as argument in a LABEL section can be used to override a global APPEND.

DEFAULT KERNEL_OPTIONS...

Sets the default kernel command line. If PXELINUX boots automatically, it acts as if the entries after DEFAULT had been typed in at the boot prompt, except the auto option is automatically added, indicating an automatic boot.

If no configuration file exists or no DEFAULT entry is defined in the configuration file, the default is the kernel name linux with no options.

IFAPPEND FLAG

Adds a specific option to the kernel command line depending on the FLAG value. The IFAPPEND option is available only on PXELINUX. FLAG expects a value, described in Table 6.1, “Generated and Added Kernel Command Line Options from IFAPPEND”:

Table 6.1: Generated and Added Kernel Command Line Options from IFAPPEND

Argument 1: ip=CLIENT_IP:BOOT_SERVER_IP:GW_IP:NETMASK

The placeholders are replaced based on the input from the DHCP/BOOTP or PXE boot server.

Note that this option is not a substitute for running a DHCP client in the booted system. Without regular renewals, the lease acquired by the PXE BIOS will expire, making the IP address available for reuse by the DHCP server.

Argument 2: BOOTIF=MAC_ADDRESS_OF_BOOT_INTERFACE

This option is useful to avoid timeouts when the installation server probes one LAN interface after the other until it gets a reply from a DHCP server. It allows an initrd program to determine from which interface the system has been booted; linuxrc reads this option and uses this network interface.

Argument 4: SYSUUID=SYSTEM_UUID

Adds UUIDs in lowercase hexadecimal; see /usr/share/doc/packages/syslinux/pxelinux.txt.

LABEL LABEL KERNEL IMAGE APPEND OPTIONS...

Indicates that if LABEL is entered as the kernel to boot, PXELINUX should instead boot IMAGE and the specified APPEND options should be used instead of the ones specified in the global section of the file (before the first LABEL command). The default for IMAGE is the same as LABEL and, if no APPEND is given, the default is to use the global entry (if any). Up to 128 LABEL entries are permitted.

PXELINUX uses the following syntax:

label MYLABEL
  kernel MYKERNEL
  append MYOPTIONS

Labels are mangled as if they were file names and they must be unique after mangling. For example, the two labels v2.6.30 and v2.6.31 would not be distinguishable under PXELINUX because both mangle to the same DOS file name.

The kernel does not need to be a Linux kernel. It can also be a boot sector or a COMBOOT file.

LOCALBOOT TYPE

On PXELINUX, specifying LOCALBOOT 0 instead of a KERNEL option means invoking this particular label and causes a local disk boot instead of a kernel boot.

Argument 0: Perform a normal boot.

Argument 4: Perform a local boot with the Universal Network Driver Interface (UNDI) driver still resident in memory.

Argument 5: Perform a local boot with the entire PXE stack, including the UNDI driver, still resident in memory.

All other values are undefined. If you do not know what the UNDI or PXE stacks are, specify 0.

TIMEOUT TIME-OUT

Indicates how long to wait at the boot prompt until booting automatically, in units of 1/10 second. The time-out is canceled when the user types anything on the keyboard, assuming the user will complete the command begun. A time-out of zero disables the time-out completely (this is also the default). The maximum possible time-out value is 35996 (just less than one hour).

PROMPT flag_val

If flag_val is 0, displays the boot prompt only if Shift or Alt is pressed or Caps Lock or Scroll Lock is set (this is the default). If flag_val is 1, always displays the boot prompt.

F1  FILENAME
F2  FILENAME
...
F9  FILENAME
F10 FILENAME

Displays the indicated file on the screen when a function key is pressed at the boot prompt. This can be used to implement preboot online help (presumably for the kernel command line options). For backward compatibility with earlier releases, F10 can also be entered as F0. Note that there is currently no way to bind file names to F11 and F12.
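
As a summary, the following is a minimal, purely illustrative pxelinux.cfg/default file combining several of the options described above; file names and labels are hypothetical:

default harddisk
prompt 1
timeout 100
F1 help.txt

# boot from the local disk by default
label harddisk
  localboot 0

# network installation entry
label install
  kernel linux
  append initrd=initrd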

6.5 Preparing the Target System for PXE Boot

Prepare the system's BIOS for PXE boot by including the PXE option in the BIOS boot order.

Warning
Warning: BIOS Boot Order

Do not place the PXE option ahead of the hard disk boot option in the BIOS. Otherwise this system would try to re-install itself every time you boot it.

6.6 Preparing the Target System for Wake on LAN

Wake on LAN (WOL) requires the appropriate BIOS option to be enabled prior to the installation. Also, note down the MAC address of the target system. This data is needed to initiate Wake on LAN.

6.7 Wake on LAN

Wake on LAN allows a machine to be turned on by a special network packet containing the machine's MAC address. Because every machine in the world has a unique MAC identifier, you do not need to worry about accidentally turning on the wrong machine.

Important
Important: Wake on LAN across Different Network Segments

If the controlling machine is not located on the same network segment as the installation target that should be awakened, either configure the WOL requests to be sent as multicasts or remotely control a machine on that network segment to act as the sender of these requests.

Users of SUSE Linux Enterprise Server can use a YaST module called WOL to easily configure Wake on LAN. Users of other versions of SUSE Linux-based operating systems can use a command line tool.
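
For example, the wol command line tool sends the required magic packet. This is a sketch assuming the wol package is installed; the MAC address is purely illustrative:

tux > wol 00:30:6E:08:EC:80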

6.8 Wake on LAN with YaST

  1. Log in as root.

  2. Start YaST › Network Services › WOL.

  3. Click Add and enter the host name and MAC address of the target system.

  4. To turn on this machine, select the appropriate entry and click Wake up.

6.9 Booting from CD or USB Drive Instead of PXE

You can also use a CD, DVD or USB drive with a small system image instead of booting via PXE. The necessary files will be loaded via NFS after the kernel and initrd have been loaded. A bootable image can be created with mksusecd. This can be useful if the target machine does not support PXE boot.

Install it with sudo zypper in mksusecd. Use the following command to create a bootable ISO image:

tux > mksusecd --create image.iso \
--net=nfs://192.168.1.1:/srv/install/ARCH/OS_VERSION/SP_VERSION/cd1  \
/srv/tftpboot/EFI/ARCH/boot

Replace ARCH with the folder corresponding to the target system architecture. Also replace OS_version and SP_version according to your paths in Section 6.3, “Installing Files on TFTP Server”.

Instead of using an NFS server for the --net option, it is also possible to use an HTTP repository, for example the openSUSE repository:

tux > mksusecd --create image.iso \
--net=http://download.opensuse.org/tumbleweed/repo/oss/suse \
/srv/tftpboot/EFI/ARCH/boot

The image.iso can be written to a DVD or CD, or to a USB stick using dd:

root # dd if=image.iso of=/dev/USB_DEVICE

Replace USB_DEVICE with the device name of your USB stick. Check the device name carefully to make sure that you do not accidentally destroy data on another drive.

Part IV Remote Installation

7 Remote Installation

SUSE® Linux Enterprise Desktop can be installed in different ways. In addition to the usual media installation covered in Chapter 3, Installation with YaST, you can choose from various network-based approaches or even opt for an unattended installation of SUSE Linux Enterprise Desktop.

7 Remote Installation


SUSE® Linux Enterprise Desktop can be installed in different ways. In addition to the usual media installation covered in Chapter 3, Installation with YaST, you can choose from various network-based approaches or even opt for an unattended installation of SUSE Linux Enterprise Desktop.

Each method is introduced by means of two short checklists: one listing the prerequisites for that method and the other illustrating the basic procedure. More detail is then provided for all the techniques used in these installation scenarios.

Note
Note: Terminology

In the following sections, the system to hold your new SUSE Linux Enterprise Desktop installation is called target system or installation target. The term repository (previously called installation source) is used for all sources of installation data. This includes physical media, such as CD and DVD, and network servers distributing the installation data in your network.

7.1 Installation Scenarios for Remote Installation

This section introduces the most common installation scenarios for remote installations. For each scenario, carefully check the list of prerequisites and follow the procedure outlined for that scenario. If in need of detailed instructions for a particular step, follow the links provided for each one of them.

7.1.1 Simple Remote Installation via VNC—Static Network Configuration

This type of installation still requires some degree of physical access to the target system to boot for installation. The installation is controlled by a remote workstation using VNC to connect to the installation program. User interaction is required as with the manual installation in Chapter 3, Installation with YaST.

For this type of installation, make sure that the following requirements are met:

  • A repository, either remote or local:

    • Remote repository: NFS, HTTP, FTP, TFTP, or SMB with working network connection.

    • Local repository, for example a DVD.

  • Target system with working network connection.

  • Controlling system with working network connection and VNC viewer software.

  • Physical boot medium (CD, DVD, or flash disk) for booting the target system.

  • Valid static IP addresses already assigned to the repository and the controlling system.

  • Valid static IP address to assign to the target system.

To perform this kind of installation, proceed as follows:

  1. Set up the repository as described in Chapter 5, Setting Up the Server Holding the Installation Sources. Choose an NFS, HTTP, FTP, or TFTP network server. For an SMB repository, refer to Section 5.5, “Managing an SMB Repository”.

  2. Boot the target system using DVD1 of the SUSE Linux Enterprise Desktop media kit.

  3. When the boot screen of the target system appears, use the boot options prompt to set the appropriate VNC options and the address of the repository. This is described in detail in Section 7.2, “Booting the Target System for Installation”.

    The target system boots to a text-based environment, giving the network address and display number under which the graphical installation environment can be addressed by any VNC viewer application or browser. VNC installations announce themselves over OpenSLP, provided the firewall settings permit this. They can be found using slptool as described in Procedure 7.1, “Locating VNC installations via OpenSLP”.

  4. On the controlling workstation, open a VNC viewing application or Web browser and connect to the target system as described in Section 7.3.1, “VNC Installation”.

  5. Perform the installation as described in Chapter 3, Installation with YaST. Reconnect to the target system after it reboots for the final part of the installation.

  6. Finish the installation.

7.1.2 Simple Remote Installation via VNC—Dynamic Network Configuration

This type of installation still requires some degree of physical access to the target system to boot for installation. The network configuration is done via DHCP. The installation is controlled from a remote workstation using VNC, but configuration does require user interaction.

For this type of installation, make sure that the following requirements are met:

  • Remote repository: NFS, HTTP, FTP, or SMB with working network connection.

  • Target system with working network connection.

  • Controlling system with working network connection and VNC viewer software.

  • Physical boot medium (CD, DVD, or flash disk) for booting the target system.

  • Running DHCP server providing IP addresses.

To perform this kind of installation, proceed as follows:

  1. Set up the repository as described in Chapter 5, Setting Up the Server Holding the Installation Sources. Choose an NFS, HTTP, or FTP network server. For an SMB repository, refer to Section 5.5, “Managing an SMB Repository”.

  2. Boot the target system using DVD1 of the SUSE Linux Enterprise Desktop media kit.

  3. When the boot screen of the target system appears, use the boot options prompt to set the appropriate VNC options and the address of the repository. This is described in detail in Section 7.2, “Booting the Target System for Installation”.

    The target system boots to a text-based environment, giving the network address and display number under which the graphical installation environment can be addressed by any VNC viewer application or browser. VNC installations announce themselves over OpenSLP, provided the firewall settings permit this. They can be found using slptool as described in Procedure 7.1, “Locating VNC installations via OpenSLP”.

  4. On the controlling workstation, open a VNC viewing application or Web browser and connect to the target system as described in Section 7.3.1, “VNC Installation”.

  5. Perform the installation as described in Chapter 3, Installation with YaST. Reconnect to the target system after it reboots for the final part of the installation.

  6. Finish the installation.

7.1.3 Remote Installation via VNC—PXE Boot and Wake on LAN

This type of installation is completely hands-off. The target machine is started and booted remotely. User interaction is only needed for the actual installation. This approach is suitable for cross-site deployments.

To perform this type of installation, make sure that the following requirements are met:

  • Remote repository: NFS, HTTP, FTP, or SMB with working network connection.

  • TFTP server.

  • Running DHCP server for your network.

  • Target system capable of PXE boot, networking, and Wake on LAN, plugged in and connected to the network.

  • Controlling system with working network connection and VNC viewer software.

To perform this type of installation, proceed as follows:

  1. Set up the repository as described in Chapter 5, Setting Up the Server Holding the Installation Sources. Choose an NFS, HTTP, or FTP network server or configure an SMB repository as described in Section 5.5, “Managing an SMB Repository”.

  2. Set up a TFTP server to hold a boot image that can be pulled by the target system. This is described in Section 6.2, “Setting Up a TFTP Server”.

  3. Set up a DHCP server to provide IP addresses to all machines and reveal the location of the TFTP server to the target system. This is described in Section 6.1, “Setting Up a DHCP Server”.

  4. Prepare the target system for PXE boot. This is described in further detail in Section 6.5, “Preparing the Target System for PXE Boot”.

  5. Initiate the boot process of the target system using Wake on LAN. This is described in Section 6.7, “Wake on LAN”.

  6. On the controlling workstation, open a VNC viewing application or Web browser and connect to the target system as described in Section 7.3.1, “VNC Installation”.

  7. Perform the installation as described in Chapter 3, Installation with YaST. Reconnect to the target system after it reboots for the final part of the installation.

  8. Finish the installation.

7.1.4 Simple Remote Installation via SSH—Static Network Configuration

This type of installation still requires some degree of physical access to the target system to boot for installation and to determine the IP address of the installation target. The installation itself is entirely controlled from a remote workstation using SSH to connect to the installer. User interaction is required as with the regular installation described in Chapter 3, Installation with YaST.

For this type of installation, make sure that the following requirements are met:

  • Remote repository: NFS, HTTP, FTP, or SMB with working network connection.

  • Target system with working network connection.

  • Controlling system with working network connection and working SSH client software.

  • Physical boot medium (CD, DVD, or flash disk) for booting the target system.

  • Valid static IP addresses already assigned to the repository and the controlling system.

  • Valid static IP address to assign to the target system.

To perform this kind of installation, proceed as follows:

  1. Set up the repository as described in Chapter 5, Setting Up the Server Holding the Installation Sources. Choose an NFS, HTTP, or FTP network server. For an SMB repository, refer to Section 5.5, “Managing an SMB Repository”.

  2. Boot the target system using DVD1 of the SUSE Linux Enterprise Desktop media kit.

  3. When the boot screen of the target system appears, use the boot options prompt to set the appropriate parameters for network connection, address of the repository, and SSH enablement. This is described in detail in Section 7.2.2, “Using Custom Boot Options”.

    The target system boots to a text-based environment, giving the network address under which the graphical installation environment can be addressed by any SSH client.

  4. On the controlling workstation, open a terminal window and connect to the target system as described in Section 7.3.2.2, “Connecting to the Installation Program”.

  5. Perform the installation as described in Chapter 3, Installation with YaST. Reconnect to the target system after it reboots for the final part of the installation.

  6. Finish the installation.

7.1.5 Simple Remote Installation via SSH—Dynamic Network Configuration

This type of installation still requires some degree of physical access to the target system to boot for installation and to determine the IP address of the installation target. The installation is controlled from a remote workstation using SSH, but configuration does require user interaction.

Note
Note: Avoid Lost Connections After the Second Step (Installation)

In the network settings dialog, check the Traditional Method with ifup and avoid NetworkManager. Otherwise your SSH connection will be lost during installation. After the installation has finished, reset the settings to User Controlled with NetworkManager.

For this type of installation, make sure that the following requirements are met:

  • A repository, either remote or local:

    • Remote repository: NFS, HTTP, FTP, TFTP, or SMB with working network connection.

    • Local repository, for example a DVD.

  • Target system with working network connection.

  • Controlling system with working network connection and working SSH client software.

  • Physical boot medium (CD, DVD, or flash disk) for booting the target system.

  • Running DHCP server providing IP addresses.

To perform this kind of installation, proceed as follows:

  1. Set up the repository source as described in Chapter 5, Setting Up the Server Holding the Installation Sources. Choose an NFS, HTTP, or FTP network server. For an SMB repository, refer to Section 5.5, “Managing an SMB Repository”.

  2. Boot the target system using DVD1 of the SUSE Linux Enterprise Desktop media kit.

  3. When the boot screen of the target system appears, use the boot options prompt to pass the appropriate parameters for network connection, location of the installation source, and SSH enablement. See Section 7.2.2, “Using Custom Boot Options” for detailed instructions on the use of these parameters.

    The target system boots to a text-based environment, giving you the network address under which the graphical installation environment can be addressed by any SSH client.

  4. On the controlling workstation, open a terminal window and connect to the target system as described in Section 7.3.2.2, “Connecting to the Installation Program”.

  5. Perform the installation as described in Chapter 3, Installation with YaST. Reconnect to the target system after it reboots for the final part of the installation.

  6. Finish the installation.

7.1.6 Remote Installation via SSH—PXE Boot and Wake on LAN

This type of installation is completely hands-off. The target machine is started and booted remotely.

To perform this type of installation, make sure that the following requirements are met:

  • Remote repository: NFS, HTTP, FTP, or SMB with working network connection.

  • TFTP server.

  • Running DHCP server for your network, providing a static IP to the host to install.

  • Target system capable of PXE boot, networking, and Wake on LAN, plugged in and connected to the network.

  • Controlling system with working network connection and SSH client software.

To perform this type of installation, proceed as follows:

  1. Set up the repository as described in Chapter 5, Setting Up the Server Holding the Installation Sources. Choose an NFS, HTTP, or FTP network server. For the configuration of an SMB repository, refer to Section 5.5, “Managing an SMB Repository”.

  2. Set up a TFTP server to hold a boot image that can be pulled by the target system. This is described in Section 6.2, “Setting Up a TFTP Server”.

  3. Set up a DHCP server to provide IP addresses to all machines and reveal the location of the TFTP server to the target system. This is described in Section 6.1, “Setting Up a DHCP Server”.

  4. Prepare the target system for PXE boot. This is described in further detail in Section 6.5, “Preparing the Target System for PXE Boot”.

  5. Initiate the boot process of the target system using Wake on LAN. This is described in Section 6.7, “Wake on LAN”.

  6. On the controlling workstation, start an SSH client and connect to the target system as described in Section 7.3.2, “SSH Installation”.

  7. Perform the installation as described in Chapter 3, Installation with YaST. Reconnect to the target system after it reboots for the final part of the installation.

  8. Finish the installation.

7.2 Booting the Target System for Installation

Apart from the possibilities mentioned under Section 6.7, “Wake on LAN” and Section 6.3.1, “Preparing the Structure”, there are two ways to customize the boot process for installation. You can either use the default boot options and function keys, or use the boot options prompt in the installation boot screen to pass any boot options the installation kernel may require for the specific hardware.

7.2.1 Using the Default Boot Options

The boot options are described in detail in Chapter 3, Installation with YaST. Generally, selecting Installation starts the installation boot process.

If problems occur, use Installation—ACPI Disabled or Installation—Safe Settings. For more information about troubleshooting the installation process, refer to Section 34.2, “Installation Problems”.

The menu bar at the bottom of the screen offers some advanced functionality needed in some setups. Using the function keys (F1 ... F12), you can specify additional options to pass to the installation routines without having to know the detailed syntax of these parameters (see Section 7.2.2, “Using Custom Boot Options”). A detailed description of the available function keys is available in Section 3.2.1.1, “The Boot Screen on Machines Equipped with Traditional BIOS”.

7.2.2 Using Custom Boot Options

Using the appropriate set of boot options helps simplify your installation procedure. Many parameters can also be configured later using the linuxrc routines, but using the boot options is easier. In some automated setups, the boot options can be provided with initrd or an info file.

The following table lists all installation scenarios mentioned in this chapter with the required parameters for booting and the corresponding boot options. Append all of them in the order they appear in this table to get one boot option string that is handed to the installation routines. For example (all in one line):

install=XXX netdevice=XXX hostip=XXX netmask=XXX vnc=XXX VNCPassword=XXX

Replace all the values XXX in this command with the values appropriate for your setup.
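
For example, a complete boot option string for the VNC scenario with static network configuration could look like the following; all values are illustrative and follow the example network from Chapter 6 (all in one line):

install=nfs://192.168.1.1/srv/install/x86/OS_VERSION/SP_VERSION/cd1 hostip=192.168.1.20 netmask=255.255.255.0 gateway=192.168.1.1 vnc=1 VNCPassword=some_password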

Chapter 3, Installation with YaST

Parameters Needed for Booting.  None

Boot Options.  None needed

Section 7.1.1, “Simple Remote Installation via VNC—Static Network Configuration”
Parameters Needed for Booting
  • Location of the installation server

  • Network device

  • IP address

  • Netmask

  • Gateway

  • VNC enablement

  • VNC password

Boot Options
  • install=(nfs,http,​ftp,smb)://PATH_TO_INSTMEDIA

  • netdevice=NETDEVICE (only needed if several network devices are available)

  • hostip=IP_ADDRESS

  • netmask=NETMASK

  • gateway=IP_GATEWAY

  • vnc=1

  • VNCPassword=PASSWORD

Section 7.1.2, “Simple Remote Installation via VNC—Dynamic Network Configuration”
Parameters Needed for Booting
  • Location of the installation server

  • VNC enablement

  • VNC password

Boot Options
  • install=(nfs,http,​ftp,smb)://PATH_TO_INSTMEDIA

  • vnc=1

  • VNCPassword=PASSWORD

Section 7.1.3, “Remote Installation via VNC—PXE Boot and Wake on LAN”
Parameters Needed for Booting
  • Location of the installation server

  • Location of the TFTP server

  • VNC enablement

  • VNC password

Boot Options.  Not applicable; process managed through PXE and DHCP

Section 7.1.4, “Simple Remote Installation via SSH—Static Network Configuration”
Parameters Needed for Booting
  • Location of the installation server

  • Network device

  • IP address

  • Netmask

  • Gateway

  • SSH enablement

  • SSH password

Boot Options
  • install=(nfs,http,​ftp,smb)://PATH_TO_INSTMEDIA

  • netdevice=NETDEVICE (only needed if several network devices are available)

  • hostip=IP_ADDRESS

  • netmask=NETMASK

  • gateway=IP_GATEWAY

  • ssh=1

  • ssh.password=PASSWORD

Section 7.1.5, “Simple Remote Installation via SSH—Dynamic Network Configuration”
Parameters Needed for Booting
  • Location of the installation server

  • SSH enablement

  • SSH password

Boot Options
  • install=(nfs,http,​ftp,smb)://PATH_TO_INSTMEDIA

  • ssh=1

  • ssh.password=PASSWORD

Section 7.1.6, “Remote Installation via SSH—PXE Boot and Wake on LAN”
Parameters Needed for Booting
  • Location of the installation server

  • Location of the TFTP server

  • SSH enablement

  • SSH password

Boot Options.  Not applicable; process managed through PXE and DHCP

Tip
Tip: More Information about linuxrc Boot Options

Find more information about the linuxrc boot options used for booting a Linux system at http://en.opensuse.org/SDB:Linuxrc.

7.2.2.1 Installing Add-On Products and Driver Updates

SUSE Linux Enterprise Desktop supports installation of add-on products, such as extensions (for example the SUSE Linux Enterprise High Availability Extension), third-party products as well as drivers or additional software. To automatically install an add-on product when deploying SUSE Linux Enterprise Desktop remotely, specify the addon=REPOSITORY parameter.

REPOSITORY needs to be a hosted repository that can be read by YaST (YaST2 or YUM (rpm-md)). ISO images are currently not supported.
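
For example, assuming a hypothetical add-on repository served over HTTP from the example installation server, the boot options could be combined as follows:

install=nfs://192.168.1.1/srv/install/x86/OS_VERSION/SP_VERSION/cd1 addon=http://192.168.1.1/addon-repo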

Tip
Tip: Driver Updates

Driver Updates can be found at http://drivers.suse.com/. Not all driver updates are provided as repositories—some are only available as ISO images and therefore cannot be installed with the addon parameter. Instructions on how to install driver updates via ISO image are available at http://drivers.suse.com/doc/SolidDriver/Driver_Kits.html.

7.3 Monitoring the Installation Process

There are several options for remotely monitoring the installation process. If the appropriate boot options have been specified while booting for installation, either VNC or SSH can be used to control the installation and system configuration from a remote workstation.

7.3.1 VNC Installation

Using any VNC viewer software, you can remotely control the installation of SUSE Linux Enterprise Desktop from virtually any operating system. This section introduces the setup using a VNC viewer application or a Web browser.

7.3.1.1 Preparing for VNC Installation

To enable VNC on the installation target, specify the appropriate boot options at the initial boot for installation (see Section 7.2.2, “Using Custom Boot Options”). The target system boots into a text-based environment and waits for a VNC client to connect to the installation program.

The installation program announces the IP address and display number needed to connect for installation. If you have physical access to the target system, this information is provided right after the system booted for installation. Enter this data when your VNC client software prompts for it and provide your VNC password.

Because the installation target announces itself via OpenSLP, you can retrieve its address information via an SLP browser without the need for any physical contact with the installation target itself, provided your network setup and all machines support OpenSLP:

Procedure 7.1: Locating VNC installations via OpenSLP
  1. Run slptool findsrvtypes | grep vnc to get a list of all services offering VNC. The VNC installation targets should be available under a service named YaST.installation.suse.

  2. Run slptool findsrvs YaST.installation.suse to get a list of available installations. Use the IP address and the port (usually 5901) provided there with your VNC viewer.

7.3.1.2 Connecting to the Installation Program

To connect to a VNC server (the installation target in this case), start an independent VNC viewer application on any operating system.

Using VNC, you can control the installation of a Linux system from any other operating system, including other Linux flavors, Windows, or macOS.

On a Linux machine, make sure that the package tightvnc is installed. On a Windows machine, install the Windows port of this application, which can be obtained at the TightVNC home page (http://www.tightvnc.com/download.html).

To connect to the installation program running on the target machine, proceed as follows:

  1. Start the VNC viewer.

  2. Enter the IP address and display number of the installation target as provided by the SLP browser or the installation program itself:

    IP_ADDRESS:DISPLAY_NUMBER

    A window opens on your desktop displaying the YaST screens as in a normal local installation.
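
For example, with the TightVNC viewer and an illustrative target address and display number:

tux > vncviewer 192.168.1.20:1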

7.3.2 SSH Installation

Using SSH, you can remotely control the installation of your Linux machine using any SSH client software.

7.3.2.1 Preparing for SSH Installation

In addition to installing the required software package (OpenSSH for Linux and PuTTY for Windows), you need to specify the appropriate boot options to enable SSH for installation. See Section 7.2.2, “Using Custom Boot Options” for details. OpenSSH is installed by default on any SUSE Linux–based operating system.

7.3.2.2 Connecting to the Installation Program

  1. Retrieve the installation target's IP address. If you have physical access to the target machine, take the IP address the installation routine provides in the console after the initial boot. Otherwise take the IP address that has been assigned to this particular host in the DHCP server configuration.

  2. In a command line, enter the following command:

    ssh -X root@IP_ADDRESS_OF_TARGET

    Replace IP_ADDRESS_OF_TARGET with the actual IP address of the installation target.

  3. When prompted for a user name, enter root.

  4. When prompted for the password, enter the password that has been set with the SSH boot option. After you have successfully authenticated, a command line prompt for the installation target appears.

  5. Enter yast to launch the installation program. A window opens showing the normal YaST screens as described in Chapter 3, Installation with YaST.

Part V Initial System Configuration

8 Setting Up Hardware Components with YaST

YaST allows you to configure hardware items such as audio hardware, your system keyboard layout or printers.

9 Advanced Disk Setup

Sophisticated system configurations require specific disk setups. All common partitioning tasks can be done with YaST. To get persistent device naming with block devices, use the block devices below /dev/disk/by-id or /dev/disk/by-uuid. Logical Volume Management (LVM) is a disk partitioning scheme t…

10 Installing or Removing Software

Use YaST's software management module to search for software components you want to add or remove. YaST resolves all dependencies for you. To install packages not shipped with the installation media, add additional software repositories to your setup and let YaST manage them. Keep your system up-to-date by managing software updates with the update applet.

11 Installing Modules, Extensions, and Third Party Add-On Products

Modules and extensions add parts or functionality to the system. Modules are fully supported parts of SUSE Linux Enterprise Desktop with a different life cycle and update timeline. They are a set of packages, have a clearly defined scope and are delivered via online channel only.

Extensions, such as the Workstation Extension or the High Availability Extension, add extra functionality to the system and require their own registration key, which involves additional costs. Extensions are delivered via online channel or physical media. Registering at the SUSE Customer Center or a local registration server is a prerequisite for subscribing to the online channels. The Package Hub (Section 11.5, “SUSE Package Hub”) and SUSE Software Development Kit (Section 11.4, “SUSE Software Development Kit (SDK) 12 SP3”) extensions are exceptions: they do not require a registration key and are not covered by SUSE support agreements.

A list of modules and extensions for your product is available after having registered your system at SUSE Customer Center or a local registration server. If you skipped the registration step during the installation, you can register your system at any time using the SUSE Customer Center Configuration module in YaST. For details, refer to Section 17.9, “Registering Your System”.

Some add-on products are also provided by third parties, for example, binary-only drivers that are needed by certain hardware to function properly. If you have such hardware, refer to the release notes for more information about availability of binary drivers for your system. The release notes are available from http://www.suse.com/releasenotes/, from YaST or from /usr/share/doc/release-notes/ in your installed system.

12 Installing Multiple Kernel Versions

SUSE Linux Enterprise Desktop supports the parallel installation of multiple kernel versions. When installing a second kernel, a boot entry and an initrd are automatically created, so no further manual configuration is needed. When rebooting the machine, the newly added kernel is available as an additional boot option.

Using this functionality, you can safely test kernel updates while being able to always fall back to the proven former kernel. To do this, do not use the update tools (such as the YaST Online Update or the updater applet), but instead follow the process described in this chapter.

13 Managing Users with YaST

During installation, you could have created a local user for your system. With the YaST module User and Group Management you can add more users or edit existing ones. It also lets you configure your system to authenticate users with a network server.

14 Changing Language and Country Settings with YaST

Working in different countries or having to work in a multilingual environment requires your computer to be set up to support this. SUSE® Linux Enterprise Desktop can handle different locales in parallel. A locale is a set of parameters that defines the language and country settings reflected in the…

8 Setting Up Hardware Components with YaST


YaST allows you to configure hardware items such as audio hardware, your system keyboard layout or printers.

Note
Note: Graphics Card, Monitor, Mouse and Keyboard Settings

Graphics card, monitor, mouse and keyboard can be configured with GNOME tools. See Section 3.3, “Hardware” for details.

8.1 Setting Up Your System Keyboard Layout


The YaST System Keyboard Layout module lets you define the default keyboard layout for the system (also used for the console). Users can modify the keyboard layout in their individual X sessions, using the desktop's tools.

  1. Start the YaST System Keyboard Configuration dialog by clicking Hardware › System Keyboard Layout in YaST. Alternatively, start the module from the command line with sudo yast2 keyboard.

  2. Select the desired Keyboard Layout from the list.

  3. Optionally, you can also define the keyboard repeat rate or keyboard delay rate in the Expert Settings.

  4. Try the selected settings in the Test text box.

  5. If the result is as expected, confirm your changes and close the dialog. The settings are written to /etc/sysconfig/keyboard.
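
To inspect the result on the command line, you can display the file; the exact variables it contains vary between systems:

tux > cat /etc/sysconfig/keyboard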

8.2 Setting Up Sound Cards


YaST detects most sound cards automatically and configures them with the appropriate values. To change the default settings, or to set up a sound card that could not be configured automatically, use the YaST sound module. There, you can also set up additional sound cards or switch their order.

To start the sound module, start YaST and click Hardware › Sound. Alternatively, start the Sound Configuration dialog directly by running yast2 sound & as user root from a command line.

The dialog shows all sound cards that were detected.

Procedure 8.1: Configuring Sound Cards

If you have added a new sound card or YaST could not automatically configure an existing sound card, follow the steps below. For configuring a new sound card, you need to know your sound card vendor and model. If in doubt, refer to your sound card documentation for the required information. For a reference list of sound cards supported by ALSA with their corresponding sound modules, see http://www.alsa-project.org/main/index.php/Matrix:Main.

During configuration, you can choose between the following setup options:

Quick Automatic Setup

You are not required to go through any of the further configuration steps—the sound card is configured automatically. You can set the volume or any options you want to change later.

Normal Setup

Allows you to adjust the output volume and play a test sound during the configuration.

Advanced setup with possibility to change options

For experts only. Allows you to customize all parameters of the sound card.

Important
Important: Advanced Configuration

Only use this option if you know exactly what you are doing. Otherwise leave the parameters untouched and use the normal or the automatic setup options.

  1. Start the YaST sound module.

  2. To configure a detected, but Not Configured sound card, select the respective entry from the list and click Edit.

    To configure a new sound card, click Add. Select your sound card vendor and model and click Next.

  3. Choose one of the setup options and click Next.

  4. If you have chosen Normal Setup, you can now Test your sound configuration and make adjustments to the volume. You should start at about ten percent volume to avoid damage to your hearing or the speakers.

  5. If all options are set according to your wishes, click Next.

    The Sound Configuration dialog shows the newly configured or modified sound card.

  6. To remove a sound card configuration that you no longer need, select the respective entry and click Delete.

  7. Click OK to save the changes and leave the YaST sound module.

Procedure 8.2: Modifying Sound Card Configurations
  1. To change the configuration of an individual sound card (for experts only!), select the sound card entry in the Sound Configuration dialog and click Edit.

    This takes you to the Sound Card Advanced Options where you can fine-tune several parameters. For more information, click Help.

  2. To adjust the volume of an already configured sound card or to test the sound card, select the sound card entry in the Sound Configuration dialog and click Other. Select the respective menu item.

    Note
    Note: YaST Mixer

    The YaST mixer settings provide only basic options. They are intended for troubleshooting (for example, if the test sound is not audible). Access the YaST mixer settings from Other › Volume. For everyday use and fine-tuning of sound options, use the mixer applet provided by your desktop or the alsamixer command line tool.

  3. For playback of MIDI files, select Other › Start Sequencer.

  4. When a supported sound card is detected, you can install SoundFonts for playback of MIDI files:

    1. Insert the original driver CD-ROM into your CD or DVD drive.

    2. Select Other › Install SoundFonts to copy SF2 SoundFonts™ to your hard disk. The SoundFonts are saved in the directory /usr/share/sfbank/creative/.

  5. If you have configured more than one sound card in your system you can adjust the order of your sound cards. To set a sound card as primary device, select the sound card in the Sound Configuration and click Other › Set as the Primary Card. The sound device with index 0 is the default device and thus used by the system and the applications.

  6. By default, SUSE Linux Enterprise Desktop uses the PulseAudio sound system. It is an abstraction layer that helps to mix multiple audio streams, bypassing any restrictions the hardware may have. To enable or disable the PulseAudio sound system, click Other › PulseAudio Configuration. If enabled, the PulseAudio daemon is used to play sounds. Disable PulseAudio Support to use a different sound system for the whole system.

The volume and configuration of all sound cards are saved when you click OK and leave the YaST sound module. The mixer settings are saved to the file /etc/asound.state. The ALSA configuration data is appended to the end of the file /etc/modprobe.d/sound and written to /etc/sysconfig/sound.
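
If you need to restore the saved mixer settings manually, for example when troubleshooting, the alsactl tool can read the same state file. This is a sketch assuming the standard ALSA tools are installed:

root # alsactl -f /etc/asound.state restore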

8.3 Setting Up a Printer


YaST can be used to configure a local printer connected to your machine via USB and to set up printing with network printers. It is also possible to share printers over the network. Further information about printing (general information, technical details, and troubleshooting) is available in Chapter 18, Printer Operation.

In YaST, click Hardware › Printer to start the printer module. By default it opens in the Printer Configurations view, displaying a list of all printers that are available and configured. This is especially useful when you have access to many printers via the network. From here you can also Print a Test Page and configure printers.

Note
Note: Starting CUPS

To be able to print from your system, CUPS must be running. If it is not running, you are asked to start it. Answer Yes, otherwise you cannot configure printing. If CUPS is not started at boot time, you are also asked to enable it at boot. It is recommended to answer Yes, as otherwise CUPS would need to be started manually after each reboot.
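
The same can be achieved on the command line with systemd, assuming the standard cups service name:

root # systemctl start cups
root # systemctl enable cups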

8.3.1 Configuring Printers

Usually a USB printer is automatically detected. There are two possible reasons it is not automatically detected:

  • The USB printer is switched off.

  • The communication between printer and computer is not possible. Check the cable and the plugs to make sure that the printer is properly connected. If this is the case, the problem may not be printer-related, but rather a USB-related problem.

Configuring a printer is a three-step process: specify the connection type, choose a driver, and name the print queue for this setup.

For many printer models, several drivers are available. When configuring the printer, YaST defaults to the drivers marked recommended. Normally it is not necessary to change the driver. However, if you want a color printer to print only in black and white, you can use a driver that does not support color printing. If you experience performance problems with a PostScript printer when printing graphics, try switching from a PostScript driver to a PCL driver (provided your printer understands PCL).

If no driver for your printer is listed, try to select a generic driver with an appropriate standard language from the list. Refer to your printer's documentation to find out which language (the set of commands controlling the printer) your printer understands. If this does not work, refer to Section 8.3.1.1, “Adding Drivers with YaST” for another possible solution.

A printer is never used directly, but always through a print queue. This ensures that simultaneous jobs can be queued and processed one after the other. Each print queue is assigned to a specific driver, and a printer can have multiple queues. This makes it possible to set up a second queue on a color printer that prints black and white only, for example. Refer to Section 18.1, “The CUPS Workflow” for more information about print queues.

Procedure 8.3: Adding a New Printer
  1. Start the YaST printer module with Hardware › Printer.

  2. In the Printer Configurations screen click Add.

  3. If your printer is already listed under Specify the Connection, proceed with the next step. Otherwise, try to Detect More or start the Connection Wizard.

  4. In the text box under Find and Assign a Driver enter the vendor name and the model name and click Search for.

  5. Choose a driver that matches your printer. It is recommended to choose the driver listed first. If no suitable driver is displayed:

    1. Check your search term

    2. Broaden your search by clicking Find More

    3. Add a driver as described in Section 8.3.1.1, “Adding Drivers with YaST”

  6. Specify the Default paper size.

  7. In the Set Arbitrary Name field, enter a unique name for the print queue.

  8. The printer is now configured with the default settings and ready to use. Click OK to return to the Printer Configurations view. The newly configured printer is now visible in the list of printers.

8.3.1.1 Adding Drivers with YaST

Not all printer drivers available for SUSE Linux Enterprise Desktop are installed by default. If no suitable driver is available in the Find and Assign a Driver dialog when adding a new printer, install a driver package containing drivers for your printer:

Procedure 8.4: Installing Additional Driver Packages
  1. Start the YaST printer module with Hardware › Printer.

  2. In the Printer Configurations screen, click Add.

  3. In the Find and Assign a Driver section, click Driver Packages.

  4. Choose one or more suitable driver packages from the list. Do not specify the path to a printer description file.

  5. Choose OK and confirm the package installation.

  6. To directly use these drivers, proceed as described in Procedure 8.3, “Adding a New Printer”.

PostScript printers do not need printer driver software. They only need a PostScript Printer Description (PPD) file that matches the particular model. PPD files are provided by the printer manufacturer.

If no suitable PPD file is available in the Find and Assign a Driver dialog when adding a PostScript printer, install a PPD file for your printer:

Several sources for PPD files are available. It is recommended to first try additional driver packages that are shipped with SUSE Linux Enterprise Desktop but not installed by default (see below for installation instructions). If these packages do not contain suitable drivers for your printer, get PPD files directly from your printer vendor or from the driver CD of a PostScript printer. For details, see Section 18.8.2, “No Suitable PPD File Available for a PostScript Printer”. Alternatively, find PPD files at http://www.linuxfoundation.org/collaborate/workgroups/openprinting/database/databaseintro, the OpenPrinting.org printer database. When downloading PPD files from OpenPrinting, keep in mind that it always shows the latest Linux support status, which SUSE Linux Enterprise Desktop does not necessarily meet.
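
To find out which drivers and PPD files CUPS already knows about on your system, you can query the CUPS driver list. The following is a sketch; MODEL_NAME is a placeholder for part of your printer's model name:

tux > sudo lpinfo -m | grep -i MODEL_NAME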

Procedure 8.5: Adding a PPD file for PostScript Printers
  1. Start the YaST printer module with Hardware › Printer.

  2. In the Printer Configurations screen, click Add.

  3. In the Find and Assign a Driver section, click Driver Packages.

  4. Enter the full path to the PPD file into the text box under Make a Printer Description File Available.

  5. Click OK to return to the Add New Printer Configuration screen.

  6. To directly use this PPD file, proceed as described in Procedure 8.3, “Adding a New Printer”.

8.3.1.2 Editing a Local Printer Configuration

By editing an existing configuration for a printer you can change basic settings such as connection type and driver. It is also possible to adjust the default settings for paper size, resolution, media source, etc. You can change identifiers of the printer by altering the printer description or location.

  1. Start the YaST printer module with Hardware › Printer.

  2. In the Printer Configurations screen, choose a local printer configuration from the list and click Edit.

  3. Change the connection type or the driver as described in Procedure 8.3, “Adding a New Printer”. This should only be necessary in case you have problems with the current configuration.

  4. Optionally, make this printer the default by checking Default Printer.

  5. Adjust the default settings by clicking All Options for the Current Driver. To change a setting, expand the list of options by clicking the respective + sign. Change the default by clicking an option. Apply your changes with OK.

8.3.2 Configuring Printing via the Network with YaST

Network printers are not detected automatically. They must be configured manually using the YaST printer module. Depending on your network setup, you can print to a print server (CUPS, LPD, SMB, or IPX) or directly to a network printer (preferably via TCP). Access the configuration view for network printing by choosing Printing via Network from the left pane in the YaST printer module.

8.3.2.1 Using CUPS

In a Linux environment CUPS is usually used to print via the network. The simplest setup is to only print via a single CUPS server which can directly be accessed by all clients. Printing via more than one CUPS server requires a running local CUPS daemon that communicates with the remote CUPS servers.

Important
Important: Browsing Network Print Queues

CUPS servers announce their print queues over the network either via the traditional CUPS browsing protocol or via Bonjour/DNS-SD. Clients need to be able to browse these lists so that users can select specific printers to send their print jobs to. To be able to browse network print queues, the service cups-browsed provided by the package cups-filters-cups-browsed must run on all clients that print via CUPS servers. cups-browsed is started automatically when configuring network printing with YaST.

If browsing does not work after having started cups-browsed, the CUPS server(s) probably announce the network print queues via Bonjour/DNS-SD. In this case you need to additionally install the package avahi and start the associated service with sudo systemctl start avahi-daemon on all clients.
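
To verify that the required services are running on a client and that queues have been discovered, the following illustrative check can be used (assuming the service names mentioned above):

tux > systemctl status cups-browsed avahi-daemon
tux > lpstat -a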

Procedure 8.6: Printing via a Single CUPS Server
  1. Start the YaST printer module with Hardware › Printer.

  2. From the left pane, launch the Print via Network screen.

  3. Check Do All Your Printing Directly via One Single CUPS Server and specify the name or IP address of the server.

  4. Click Test Server to make sure you have chosen the correct name or IP address.

  5. Click OK to return to the Printer Configurations screen. All printers available via the CUPS server are now listed.

Procedure 8.7: Printing via Multiple CUPS Servers
  1. Start the YaST printer module with Hardware › Printer.

  2. From the left pane, launch the Print via Network screen.

  3. Check Accept Printer Announcements from CUPS Servers.

  4. Under General Settings specify which servers to use. You may accept connections from all networks available or from specific hosts. If you choose the latter option, you need to specify the host names or IP addresses.

  5. Confirm by clicking OK and then Yes when asked to start a local CUPS server. After the server has started, YaST will return to the Printer Configurations screen. Click Refresh list to see the printers detected so far. Click this button again if more printers should be available.

8.3.2.2 Using Print Servers other than CUPS

If your network offers print services via print servers other than CUPS, start the YaST printer module with Hardware › Printer and launch the Print via Network screen from the left pane. Start the Connection Wizard and choose the appropriate Connection Type. Ask your network administrator for details on configuring a network printer in your environment.

8.3.3 Sharing Printers Over the Network

Printers managed by a local CUPS daemon can be shared over the network, turning your machine into a CUPS server. Usually you share a printer by enabling CUPS' so-called browsing mode. If browsing is enabled, the local print queues are announced on the network so that they can be detected by remote CUPS daemons. It is also possible to set up a dedicated CUPS server that manages all print queues and can directly be accessed by remote clients. In this case it is not necessary to enable browsing.

Procedure 8.8: Sharing Printers
  1. Start the YaST printer module with Hardware › Printer.

  2. Launch the Share Printers screen from the left pane.

  3. Select Allow Remote Access. Also check For computers within the local network and enable browsing mode by also checking Publish printers by default within the local network.

  4. Click OK to restart the CUPS server and to return to the Printer Configurations screen.

  5. Regarding CUPS and firewall settings, see http://en.opensuse.org/SDB:CUPS_and_SANE_Firewall_settings.
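
Sharing can also be enabled directly with the CUPS configuration tool cupsctl. A minimal sketch of the equivalent command line step:

tux > sudo cupsctl --share-printers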

8.4 Setting Up a Scanner

You can configure a USB or SCSI scanner with YaST. The sane-backends package contains hardware drivers and other essentials needed to use a scanner. If you own an HP All-In-One device, see Section 8.4.1, “Configuring an HP All-In-One Device”. Instructions on how to configure a network scanner are available in Section 8.4.3, “Scanning over the Network”.

Procedure 8.9: Configuring a USB or SCSI Scanner
  1. Connect your USB or SCSI scanner to your computer and turn it on.

  2. Start YaST and select Hardware › Scanner. YaST builds the scanner database and tries to detect your scanner model automatically.

    If a USB or SCSI scanner is not properly detected, try Other › Restart Detection.

  3. To activate the scanner select it from the list of detected scanners and click Edit.

  4. Choose your model from the list and click Next and Finish.

  5. Use Other › Test to make sure you have chosen the correct driver.

  6. Leave the configuration screen with OK.

8.4.1 Configuring an HP All-In-One Device

An HP All-In-One device can be configured with YaST even if it is made available via the network. If you own a USB HP All-In-One device, start configuring as described in Procedure 8.9, “Configuring a USB or SCSI Scanner”. If it is detected properly and the Test succeeds, it is ready to use.

If your USB device is not properly detected, or your HP All-In-One device is connected to the network, run the HP Device Manager:

  1. Start YaST and select Hardware › Scanner. YaST loads the scanner database.

  2. Start the HP Device Manager with Other › Run hp-setup and follow the on-screen instructions. After you exit the HP Device Manager, the YaST scanner module automatically restarts the automatic detection.

  3. Test it by choosing Other › Test.

  4. Leave the configuration screen with OK.

8.4.2 Sharing a Scanner over the Network

SUSE Linux Enterprise Desktop allows the sharing of a scanner over the network. To do so, configure your scanner as follows:

  1. Configure the scanner as described in Section 8.4, “Setting Up a Scanner”.

  2. Choose Other › Scanning via Network.

  3. Enter the host names of the clients (separated by a comma) that should be allowed to use the scanner under Server Settings › Permitted Clients for saned and leave the configuration dialog with OK.

8.4.3 Scanning over the Network

To use a scanner that is shared over the network, proceed as follows:

  1. Start YaST and select Hardware › Scanner.

  2. Open the network scanner configuration menu with Other › Scanning via Network.

  3. Enter the host name of the machine the scanner is connected to under Client Settings › Servers Used for the net Metadriver.

  4. Leave with OK. The network scanner is now listed in the Scanner Configuration window and is ready to use.

9 Advanced Disk Setup

Sophisticated system configurations require specific disk setups. All common partitioning tasks can be done with YaST. To get persistent device naming with block devices, use the block devices below /dev/disk/by-id or /dev/disk/by-uuid. Logical Volume Management (LVM) is a disk partitioning scheme that is designed to be much more flexible than the physical partitioning used in standard setups. Its snapshot functionality enables easy creation of data backups. Redundant Array of Independent Disks (RAID) offers increased data integrity, performance, and fault tolerance. SUSE Linux Enterprise Desktop also supports multipath I/O. There is also the option to use iSCSI as a networked disk.
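
To see which persistent names are available for your block devices, list the corresponding directories. This is a read-only check that is safe to run on any system:

tux > ls -l /dev/disk/by-id /dev/disk/by-uuid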

9.1 Using the YaST Partitioner

With the expert partitioner, shown in Figure 9.1, “The YaST Partitioner”, you can manually modify the partitioning of one or several hard disks. You can add, delete, resize, and edit partitions, or access the soft RAID and LVM configuration.

Warning
Warning: Repartitioning the Running System

Although it is possible to repartition your system while it is running, the risk of making a mistake that causes data loss is very high. Try to avoid repartitioning your installed system and always do a complete backup of your data before attempting to do so.

The YaST Partitioner
Figure 9.1: The YaST Partitioner

All existing or suggested partitions on all connected hard disks are displayed in the list of Available Storage in the YaST Expert Partitioner dialog. Entire hard disks are listed as devices without numbers, such as /dev/sda. Partitions are listed as parts of these devices, such as /dev/sda1. The size, type, encryption status, file system, and mount point of the hard disks and their partitions are also displayed. The mount point describes where the partition appears in the Linux file system tree.

Several functional views are available in the left-hand System View pane. These views can be used to collect information about existing storage configurations, configure functions (like RAID, Volume Management, Crypt Files), and view file systems with additional features, such as Btrfs, NFS, or TMPFS.

If you run the expert dialog during installation, any free hard disk space is also listed and automatically selected. To provide more disk space to SUSE® Linux Enterprise Desktop, free the needed space starting from the bottom toward the top of the list (starting from the last partition of a hard disk toward the first).

9.1.1 Partition Types

Every hard disk has a partition table with space for four entries. Every entry in the partition table corresponds to a primary partition or an extended partition. Only one extended partition entry is allowed, however.

A primary partition simply consists of a continuous range of cylinders (physical disk areas) assigned to a particular operating system. With primary partitions you would be limited to four partitions per hard disk, because more do not fit in the partition table. This is why extended partitions are used. Extended partitions are also continuous ranges of disk cylinders, but an extended partition may be divided into logical partitions itself. Logical partitions do not require entries in the partition table. In other words, an extended partition is a container for logical partitions.

If you need more than four partitions, create an extended partition as the fourth partition (or earlier). This extended partition should occupy the entire remaining free cylinder range. Then create multiple logical partitions within the extended partition. The maximum number of logical partitions is 63, independent of the disk type. It does not matter which types of partitions are used for Linux. Primary and logical partitions both function normally.
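
To inspect the current partition table of a disk before making changes, you can use parted in its read-only print mode. This sketch assumes the first disk is /dev/sda:

tux > sudo parted /dev/sda print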

9.1.2 Creating a Partition

To create a partition from scratch select Hard Disks and then a hard disk with free space. The actual modification can be done in the Partitions tab:

  1. Select Add and specify the partition type (primary or extended). Create up to four primary partitions or up to three primary partitions and one extended partition. Within the extended partition, create several logical partitions (see Section 9.1.1, “Partition Types”).

  2. Specify the size of the new partition. You can either choose to occupy all the free unpartitioned space, or enter a custom size.

  3. Select the file system to use and a mount point. YaST suggests a mount point for each partition created. To use a different mount method, like mount by label, select Fstab Options. For more information on supported file systems, refer to the Storage Administration Guide.

  4. Specify additional file system options if your setup requires them. This is necessary, for example, if you need persistent device names. For details on the available options, refer to Section 9.1.3, “Editing a Partition”.

  5. Click Finish to apply your partitioning setup and leave the partitioning module.

    If you created the partition during installation, you are returned to the installation overview screen.

9.1.2.1 Btrfs Partitioning

The default file system for the root partition is Btrfs (see Chapter 7, System Recovery and Snapshot Management with Snapper for more information on Btrfs). The root file system is the default subvolume and it is not listed in the list of created subvolumes. As a default Btrfs subvolume, it can be mounted as a normal file system.

Important
Important: Btrfs on an Encrypted Root Partition

The default partitioning setup suggests the root partition as Btrfs with /boot being a directory. To encrypt the root partition, make sure to use the GPT partition table type instead of the default MSDOS type. Otherwise the GRUB2 boot loader may not have enough space for the second stage loader.

It is possible to create snapshots of Btrfs subvolumes, either manually or automatically based on system events. For example, when making changes to the file system, zypper invokes the snapper command to create snapshots before and after the change. This is useful if you are not satisfied with the change zypper made and want to restore the previous state. As snapper invoked by zypper creates snapshots of the root file system by default, it makes sense to exclude specific directories from snapshots. This is why YaST suggests creating the following separate subvolumes:

/boot/grub2/i386-pc, /boot/grub2/x86_64-efi, /boot/grub2/powerpc-ieee1275, /boot/grub2/s390x-emu

A rollback of the boot loader configuration is not supported. The directories listed above are architecture-specific. The first two directories are present on AMD64/Intel 64 machines, the latter two on IBM POWER and on IBM z Systems, respectively.

/home

If /home does not reside on a separate partition, it is excluded to avoid data loss on rollbacks.

/opt, /var/opt

Third-party products usually get installed to /opt. It is excluded to avoid uninstalling these applications on rollbacks.

/srv

Contains data for Web and FTP servers. It is excluded to avoid data loss on rollbacks.

/tmp, /var/tmp, /var/cache, /var/crash

All directories containing temporary files and caches are excluded from snapshots.

/usr/local

This directory is used when manually installing software. It is excluded to avoid uninstalling these installations on rollbacks.

/var/lib/libvirt/images

The default location for virtual machine images managed with libvirt. Excluded to ensure virtual machine images are not replaced with older versions during a rollback. By default, this subvolume is created with the option no copy on write.

/var/lib/mailman, /var/spool

Directories containing mails or mail queues are excluded to avoid a loss of mails after a rollback.

/var/lib/named

Contains zone data for the DNS server. Excluded from snapshots to ensure a name server can operate after a rollback.

/var/lib/mariadb, /var/lib/mysql, /var/lib/pgsql

These directories contain database data. By default, these subvolumes are created with the option no copy on write.

/var/log

Log file location. Excluded from snapshots to allow log file analysis after the rollback of a broken system.

Tip
Tip: Size of Btrfs Partition

Since saved snapshots require more disk space, it is recommended to reserve enough space for Btrfs. The suggested size for a root Btrfs partition with default subvolumes is 20 GB.

9.1.2.1.1 Managing Btrfs Subvolumes using YaST

Subvolumes of a Btrfs partition can now be managed with the YaST Expert Partitioner module. You can add new subvolumes or remove existing ones.

Procedure 9.1: Btrfs Subvolumes with YaST
  1. Start the YaST Expert Partitioner with System › Partitioner.

  2. Choose Btrfs in the left System View pane.

  3. Select the Btrfs partition whose subvolumes you need to manage and click Edit.

  4. Click Subvolume Handling. You see a list of all existing subvolumes of the selected Btrfs partition. You may notice several @/.snapshots/xyz/snapshot entries; each of these subvolumes belongs to one existing snapshot.

  5. Depending on whether you want to add or remove subvolumes, do the following:

    1. To remove a subvolume, select it from the list of Existing Subvolumes and click Remove.

    2. To add a new subvolume, enter its name into the New Subvolume text box and click Add new.

      Btrfs Subvolumes in YaST Partitioner
      Figure 9.2: Btrfs Subvolumes in YaST Partitioner
  6. Confirm with OK and Finish.

  7. Leave the partitioner with Finish.
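
The same information is available from the command line. For example, to list all subvolumes of the root file system (a read-only command):

tux > sudo btrfs subvolume list /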

9.1.3 Editing a Partition

When you create a new partition or modify an existing partition, you can set various parameters. For new partitions, the default parameters set by YaST are usually sufficient and do not require any modification. To edit your partition setup manually, proceed as follows:

  1. Select the partition.

  2. Click Edit to edit the partition and set the parameters:

    File System ID

    Even if you do not want to format the partition at this stage, assign it a file system ID to ensure that the partition is registered correctly. Typical values are Linux, Linux swap, Linux LVM, and Linux RAID.

    File System

    To change the partition file system, click Format Partition and select a file system type from the File System list.

    SUSE Linux Enterprise Desktop supports several types of file systems. Btrfs is the Linux file system of choice for the root partition because of its advanced features. It supports copy-on-write functionality, creating snapshots, multi-device spanning, subvolumes, and other useful techniques. XFS, Ext3 and JFS are journaling file systems. These file systems can restore the system very quickly after a system crash, using write processes logged during the operation. Ext2 is not a journaling file system, but it is adequate for smaller partitions because it does not require much disk space for management.

    The default file system for the root partition is Btrfs. The default file system for additional partitions is XFS.

    Swap is a special format that allows the partition to be used as virtual memory. Create a swap partition of at least 256 MB. However, if you use up your swap space, consider adding more memory to your system instead of adding more swap space.

    Warning
    Warning: Changing the File System

    Changing the file system and reformatting partitions irreversibly deletes all data from the partition.

    For details on the various file systems, refer to the Storage Administration Guide.

    Encrypt Device

    If you activate the encryption, all data is written to the hard disk in encrypted form. This increases the security of sensitive data, but reduces the system speed, as the encryption takes some time to process. More information about the encryption of file systems is provided in Chapter 11, Encrypting Partitions and Files.

    Mount Point

    Specify the directory where the partition should be mounted in the file system tree. Select from YaST suggestions or enter any other name.

    Fstab Options

    Specify various parameters contained in the global file system administration file (/etc/fstab). The default settings should suffice for most setups. You can, for example, change the file system identification from the device name to a volume label. In the volume label, use all characters except / and space.

    To get persistent device names, use the mount option Device ID, UUID or LABEL. In SUSE Linux Enterprise Desktop, persistent device names are enabled by default.

    If you prefer to mount the partition by its label, you need to define one in the Volume label text entry; see the example fstab entry after this procedure. For example, you could use the partition label HOME for a partition intended to be mounted at /home.

    If you intend to use quotas on the file system, use the mount option Enable Quota Support. This must be done before you can define quotas for users in the YaST User Management module. For further information on how to configure user quota, refer to Section 13.3.4, “Managing Quotas”.

  3. Select Finish to save the changes.
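
For illustration, a mount-by-label entry in /etc/fstab could look like the following. This is a sketch assuming an XFS partition labeled HOME that is mounted at /home:

LABEL=HOME  /home  xfs  defaults  0 0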

Note
Note: Resize File Systems

To resize an existing file system, select the partition and use Resize. Note that it is not possible to resize partitions while they are mounted. To resize partitions, unmount the relevant partition before running the partitioner.

9.1.4 Expert Options

After you select a hard disk device (like sda) in the System View pane, you can access the Expert menu in the lower right part of the Expert Partitioner window. The menu contains the following commands:

Create New Partition Table

This option helps you create a new partition table on the selected device.

Warning
Warning: Creating a New Partition Table

Creating a new partition table on a device irreversibly removes all the partitions and their data from that device.

Clone This Disk

This option helps you clone the device partition layout (but not the data) to other available disk devices.

9.1.5 Advanced Options

After you select the host name of the computer (the top-level of the tree in the System View pane), you can access the Configure menu in the lower right part of the Expert Partitioner window. The menu contains the following commands:

Configure iSCSI

To access SCSI over IP block devices, you first need to configure iSCSI. This results in additionally available devices in the main partition list.

Configure Multipath

Selecting this option helps you configure the multipath enhancement to the supported mass storage devices.

9.1.6 More Partitioning Tips

The following section includes a few hints and tips on partitioning that should help you make the right decisions when setting up your system.

Tip
Tip: Cylinder Numbers

Note that different partitioning tools may start counting the cylinders of a partition with 0 or with 1. When calculating the number of cylinders, you should always use the difference between the last and the first cylinder number and add one.

9.1.6.1 Using swap

Swap is used to extend the available physical memory. This makes it possible to use more memory than the physical RAM available. The memory management system of kernels before 2.4.10 needed swap as a safety measure: if you did not have twice the size of your RAM in swap, the performance of the system suffered. These limitations no longer exist.

Linux uses a technique called Least Recently Used (LRU) to select pages that might be moved from memory to disk. Therefore, running applications have more memory available and caching works more smoothly.

If an application tries to allocate the maximum allowed memory, problems with swap can arise. There are three major scenarios to look at:

System with no swap

The application gets the maximum allowed memory. All caches are freed, and thus all other running applications are slowed. After a few minutes, the kernel's out-of-memory kill mechanism activates and kills the process.

System with medium sized swap (128 MB–512 MB)

At first, the system slows like a system without swap. After all physical RAM has been allocated, swap space is used as well. At this point, the system becomes very slow and it becomes impossible to run commands remotely. Depending on the speed of the hard disks that run the swap space, the system stays in this condition for about 10 to 15 minutes until the out-of-memory kill mechanism resolves the issue. Note that you will need a certain amount of swap if the computer needs to perform a suspend to disk. In that case, the swap size should be large enough to contain the necessary data from memory (512 MB–1 GB).

System with lots of swap (several GB)

In this case, it is better not to have an application that is out of control and swapping excessively. If you use such an application, the system will need many hours to recover. In the process, it is likely that other processes get timeouts and faults, leaving the system in an undefined state, even after terminating the faulty process. In this case, do a hard machine reboot and try to get it running again. Lots of swap is only useful if you have an application that relies on this feature. Such applications (like databases or graphics manipulation programs) often have an option to directly use hard disk space for their needs. It is advisable to use this option instead of using lots of swap space.

If your system is not out of control, but needs more swap after some time, it is possible to extend the swap space online. If you prepared a partition for swap space, add this partition with YaST. If you do not have a partition available, you can also use a swap file to extend the swap. Swap files are generally slower than partitions, but compared to physical RAM, both are extremely slow so the actual difference is negligible.

Procedure 9.2: Adding a Swap File Manually

To add a swap file in the running system, proceed as follows:

  1. Create an empty file in your system. For example, if you want to add a swap file with 128 MB swap at /var/lib/swap/swapfile, use the commands:

    mkdir -p /var/lib/swap
    dd if=/dev/zero of=/var/lib/swap/swapfile bs=1M count=128
  2. Initialize this swap file with the command

    mkswap /var/lib/swap/swapfile
    Note
    Note: Changed UUID for Swap Partitions when Formatting via mkswap

    Do not reformat existing swap partitions with mkswap if possible. Reformatting with mkswap will change the UUID value of the swap partition. Either reformat via YaST (which will update /etc/fstab) or adjust /etc/fstab manually.

  3. Activate the swap with the command

    swapon /var/lib/swap/swapfile

    To disable this swap file, use the command

    swapoff /var/lib/swap/swapfile
  4. Check the current available swap spaces with the command

    cat /proc/swaps

    Note that at this point, it is only temporary swap space. After the next reboot, it is no longer used.

  5. To enable this swap file permanently, add the following line to /etc/fstab:

    /var/lib/swap/swapfile swap swap defaults 0 0

9.1.7 Partitioning and LVM

From the Expert Partitioner, access the LVM configuration by clicking the Volume Management item in the System View pane. However, if a working LVM configuration already exists on your system, it is automatically activated when you enter the LVM configuration for the first time in a session. In this case, none of the disks containing a partition belonging to an activated volume group can be repartitioned. The Linux kernel cannot reread the modified partition table of a hard disk while any partition on this disk is in use. If you already have a working LVM configuration on your system, physical repartitioning should not be necessary. Instead, change the configuration of the logical volumes.

At the beginning of the physical volumes (PVs), information about the volume is written to the partition. To reuse such a partition for other non-LVM purposes, it is advisable to delete the beginning of this volume. For example, in the VG system and PV /dev/sda2, do this with the command dd if=/dev/zero of=/dev/sda2 bs=512 count=1.

Warning
Warning: File System for Booting

The file system used for booting (the root file system or /boot) must not be stored on an LVM logical volume. Instead, store it on a normal physical partition.

To change your /usr or swap, refer to Procedure 11.1, “Updating Init RAM Disk When Switching to Logical Volumes”.

9.2 LVM Configuration

This section explains specific steps to take when configuring LVM.

Warning
Warning: Back up Your Data

Using LVM is sometimes associated with increased risk such as data loss. Risks also include application crashes, power failures, and faulty commands. Save your data before implementing LVM or reconfiguring volumes. Never work without a backup.

9.2.1 LVM Configuration with YaST

The YaST LVM configuration can be reached from the YaST Expert Partitioner (see Section 9.1, “Using the YaST Partitioner”) within the Volume Management item in the System View pane. The Expert Partitioner allows you to edit and delete existing partitions and create new ones that need to be used with LVM. The first task is to create PVs that provide space to a volume group:

  1. Select a hard disk from Hard Disks.

  2. Change to the Partitions tab.

  3. Click Add and enter the desired size of the PV on this disk.

  4. Use Do not format partition and change the File System ID to 0x8E Linux LVM. Do not mount this partition.

  5. Repeat this procedure until you have defined all the desired physical volumes on the available disks.

9.2.1.1 Creating Volume Groups

If no volume group exists on your system, you must add one (see Figure 9.3, “Creating a Volume Group”). It is possible to create additional groups by clicking Volume Management in the System View pane, and then on Add Volume Group. One single volume group is usually sufficient.

  1. Enter a name for the VG, for example, system.

  2. Select the desired Physical Extent Size. This value defines the size of a physical block in the volume group. All the disk space in a volume group is handled in blocks of this size.

  3. Add the prepared PVs to the VG by selecting the device and clicking Add. Selecting several devices is possible by holding Ctrl while selecting the devices.

  4. Select Finish to make the VG available to further configuration steps.

Creating a Volume Group
Figure 9.3: Creating a Volume Group

If you have multiple volume groups defined and want to add or remove PVs, select the volume group in the Volume Management list and click Resize. In the following window, you can add or remove PVs to the selected volume group.
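
For reference, the corresponding steps can also be performed with the LVM command line tools. The following sketch assumes that /dev/sda2 and /dev/sdb1 have been prepared as Linux LVM partitions and that the volume group is named system, as in the example above:

tux > sudo pvcreate /dev/sda2 /dev/sdb1
tux > sudo vgcreate -s 4M system /dev/sda2 /dev/sdb1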

9.2.1.2 Configuring Logical Volumes

After the volume group has been filled with PVs, define the LVs which the operating system should use in the next dialog. Choose the current volume group and change to the Logical Volumes tab. Add, Edit, Resize, and Delete LVs as needed until all space in the volume group has been occupied. Assign at least one LV to each volume group.

Logical Volume Management
Figure 9.4: Logical Volume Management

Click Add and go through the wizard-like pop-up that opens:

  1. Enter the name of the LV. For a partition that should be mounted to /home, a name like HOME could be used.

  2. Select the type of the LV. It can be either Normal Volume, Thin Pool, or Thin Volume. Note that you need to create a thin pool first, which can store individual thin volumes. The big advantage of thin provisioning is that the total sum of all thin volumes stored in a thin pool can exceed the size of the pool itself.

  3. Select the size and the number of stripes of the LV. If you have only one PV, selecting more than one stripe is not useful.

  4. Choose the file system to use on the LV and the mount point.

By using stripes it is possible to distribute the data stream in the LV among several PVs (striping). However, striping a volume can only be done over different PVs, each providing at least the amount of space of the volume. The maximum number of stripes equals the number of PVs, where Stripe "1" means "no striping". Striping only makes sense with PVs on different hard disks, otherwise performance will decrease.

Warning
Warning: Striping

YaST cannot, at this point, verify the correctness of your entries concerning striping. Any mistake made here is apparent only later when the LVM is implemented on disk.

If you have already configured LVM on your system, the existing logical volumes can also be used. Before continuing, assign appropriate mount points to these LVs. With Finish, return to the YaST Expert Partitioner and finish your work there.
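
Again for reference, creating and formatting a logical volume on the command line could look like the following sketch, assuming the volume group system and the LV name HOME used in the examples above:

tux > sudo lvcreate --name HOME --size 20G system
tux > sudo mkfs.xfs /dev/system/HOME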

9.3 Soft RAID Configuration with YaST

This section describes the actions required to create and configure various types of RAID.

9.3.1 Soft RAID Configuration with YaST

The YaST RAID configuration can be reached from the YaST Expert Partitioner, described in Section 9.1, “Using the YaST Partitioner”. This partitioning tool enables you to edit and delete existing partitions and create new ones to be used with soft RAID:

  1. Select a hard disk from Hard Disks.

  2. Change to the Partitions tab.

  3. Click Add and enter the desired size of the RAID partition on this disk.

  4. Use Do not Format the Partition and change the File System ID to 0xFD Linux RAID. Do not mount this partition.

  5. Repeat this procedure until you have defined all the desired physical volumes on the available disks.

For RAID 0 and RAID 1, at least two partitions are needed—for RAID 1, usually exactly two and no more. If RAID 5 is used, at least three partitions are required, RAID 6 and RAID 10 require at least four partitions. It is recommended to use partitions of the same size only. The RAID partitions should be located on different hard disks to decrease the risk of losing data if one is defective (RAID 1 and 5) and to optimize the performance of RAID 0. After creating all the partitions to use with RAID, click RAID › Add RAID to start the RAID configuration.

In the next dialog, choose between RAID levels 0, 1, 5, 6 and 10. Then, select all partitions with either the Linux RAID or Linux native type that should be used by the RAID system. No swap or DOS partitions are shown.

Tip
Tip: Classify Disks

For RAID types where the order of added disks matters, you can mark individual disks with one of the letters A to E. Click the Classify button, select the disk, and click one of the Class X buttons, where X is the letter you want to assign to the disk. Assign all available RAID disks this way, and confirm with OK. You can easily sort the classified disks with the Sorted or Interleaved buttons, or add a sort pattern from a text file with Pattern File.

RAID Partitions
Figure 9.5: RAID Partitions

To add a previously unassigned partition to the selected RAID volume, first click the partition then Add. Assign all partitions reserved for RAID. Otherwise, the space on the partition remains unused. After assigning all partitions, click Next to select the available RAID Options.

In this last step, set the file system to use, encryption and the mount point for the RAID volume. After completing the configuration with Finish, see the /dev/md0 device and others indicated with RAID in the expert partitioner.

9.3.2 Troubleshooting

Check the file /proc/mdstat to find out whether a RAID partition has been damaged. If the system fails, shut down your Linux system and replace the defective hard disk with a new one partitioned the same way. Then restart your system and enter the command mdadm /dev/mdX --add /dev/sdX. Replace 'X' with your particular device identifiers. This integrates the hard disk automatically into the RAID system and fully reconstructs it.

Note that although you can access all data during the rebuild, you may encounter some performance issues until the RAID has been fully rebuilt.
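
To monitor the state and rebuild progress of an array, combine /proc/mdstat with mdadm's detail output. This sketch assumes the array device is /dev/md0:

tux > cat /proc/mdstat
tux > sudo mdadm --detail /dev/md0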

9.3.3 For More Information

Configuration instructions and more details for soft RAID can be found in the Linux RAID HOWTOs. Linux RAID mailing lists are also available, such as http://marc.info/?l=linux-raid.

10 Installing or Removing Software

Abstract

Use YaST's software management module to search for software components you want to add or remove. YaST resolves all dependencies for you. To install packages not shipped with the installation media, add additional software repositories to your setup and let YaST manage them. Keep your system up-to-date by managing software updates with the update applet.

Change the software collection of your system with the YaST Software Manager. This YaST module is available in two flavors: a graphical variant for X Window and a text-based variant to be used on the command line. The graphical flavor is described here—for details on the text-based YaST, see Chapter 5, YaST in Text Mode.

Note
Note: Confirmation and Review of Changes

When installing, updating or removing packages, changes made in the Software Manager are only applied after clicking Accept or Apply. YaST maintains a list with all actions, allowing you to review and modify your changes before applying them to the system.

10.1 Definition of Terms

Repository

A local or remote directory containing packages, plus additional information about these packages (package metadata).

(Repository) Alias/Repository Name

A short name for a repository (called Alias within Zypper and Repository Name within YaST). It can be chosen by the user when adding a repository and must be unique.

Repository Description Files

Each repository provides files describing content of the repository (package names, versions, etc.). These repository description files are downloaded to a local cache that is used by YaST.

Product

Represents a whole product, for example SUSE® Linux Enterprise Desktop.

Pattern

A pattern is an installable group of packages dedicated to a certain purpose. For example, the Laptop pattern contains all packages that are needed in a mobile computing environment. Patterns define package dependencies (such as required or recommended packages) and come with a preselection of packages marked for installation. This ensures that the most important packages needed for a certain purpose are available on your system after installation of the pattern. If necessary, you can manually select or deselect packages within a pattern.

Package

A package is a compressed file in rpm format that contains the files for a particular program.

Patch

A patch consists of one or more packages and may be applied by means of delta RPMs. It may also introduce dependencies to packages that are not installed yet.

Resolvable

A generic term for product, pattern, package or patch. The most commonly used type of resolvable is a package or a patch.

Delta RPM

A delta RPM consists only of the binary diff between two defined versions of a package, and therefore has the smallest download size. Before being installed, the full RPM package is rebuilt on the local machine.

Package Dependencies

Certain packages are dependent on other packages, such as shared libraries. In other words, a package may require other packages; if the required packages are not available, the package cannot be installed. In addition to dependencies (package requirements) that must be fulfilled, some packages recommend other packages. These recommended packages are only installed if they are actually available; otherwise they are ignored and the package recommending them is installed nevertheless.

10.2 Registering Installed System

If you have skipped the registration during the installation or want to re-register your system, you can register the system at any time using the YaST module Product Registration or the command line tool SUSEConnect.

10.2.1 Registering with YaST

To register the system, start YaST and go to Software, then Product Registration.

By default the system is registered with the SUSE Customer Center. If your organization provides local registration servers, you can either choose one from the list of auto-detected servers or provide the URL manually.

10.2.2 Registering with SUSEConnect

To register from the command line, use the command

tux > sudo SUSEConnect -r REGISTRATION_CODE -e EMAIL_ADDRESS

Replace REGISTRATION_CODE with the registration code you received with your copy of SUSE Linux Enterprise Desktop. Replace EMAIL_ADDRESS with the e-mail address associated with the SUSE account you or your organization uses to manage subscriptions.

To register with a local registration server, also provide the URL to the server:

tux > sudo SUSEConnect -r REGISTRATION_CODE -e EMAIL_ADDRESS --url "URL"
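
To verify the registration afterward, SUSEConnect can print the current subscription status. This check does not change anything on the system:

tux > sudo SUSEConnect --status-text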

10.3 Using the YaST Software Manager

Start the software manager from the YaST Control Center by choosing Software › Software Management.

10.3.1 Views for Searching Packages or Patterns

The YaST software manager can install packages or patterns from all currently enabled repositories. It offers different views and filters to make it easier to find the software you are searching for. The Search view is the default view of the window. To change view, click View and select one of the following entries from the drop-down box. The selected view opens in a new tab.

Patterns

Lists all patterns available for installation on your system.

Package Groups

Lists all packages sorted by groups such as Graphics, Programming, or Security.

RPM Groups

Lists all packages sorted by functionality with groups and subgroups. For example Networking › Email › Clients.

Languages

A filter to list all packages needed to add a new system language.

Repositories

A filter to list packages by repository. To select more than one repository, hold the Ctrl key while clicking repository names. The pseudo repository @System lists all packages currently installed.

Search

Lets you search for a package according to certain criteria. Enter a search term and press Enter. Refine your search by specifying where to Search In and by changing the Search Mode. For example, if you do not know the package name but only the name of the application that you are searching for, try including the package Description in the search process.

Installation Summary

If you have already selected packages for installation, update or removal, this view shows the changes that will be applied to your system when you click Accept. To filter for packages with a certain status in this view, activate or deactivate the respective check boxes. Press Shift–F1 for details on the status flags.

Tip
Tip: Finding Packages Not Belonging to an Active Repository

To list all packages that do not belong to an active repository, choose View › Repositories › @System and then choose Secondary Filter › Unmaintained Packages. This is useful, for example, if you have deleted a repository and want to make sure no packages from that repository remain installed.
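
A similar list can be obtained on the command line: Zypper can show installed packages that no longer belong to any configured repository (assuming the zypper packages subcommand shipped with this product version):

tux > zypper packages --orphaned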

10.3.2 Installing and Removing Packages or Patterns

Certain packages are dependent on other packages, such as shared libraries. On the other hand, some packages cannot coexist with others on the system. If possible, YaST automatically resolves these dependencies or conflicts. If your choice results in a dependency conflict that cannot be automatically solved, you need to solve it manually as described in Section 10.3.4, “Checking Software Dependencies”.

Note
Note: Removal of Packages

When removing any packages, by default YaST only removes the selected packages. If you want YaST to also remove any other packages that become unneeded after removal of the specified package, select Options › Cleanup when deleting packages from the main menu.

  1. Search for packages as described in Section 10.3.1, “Views for Searching Packages or Patterns”.

  2. The packages found are listed in the right pane. To install a package or remove it, right-click it and choose Install or Delete. If the relevant option is not available, check the package status indicated by the symbol in front of the package name—press Shift–F1 for help.

    Tip
    Tip: Applying an Action to All Packages Listed

    To apply an action to all packages listed in the right pane, go to the main menu and choose an action from Package › All in This List.

  3. To install a pattern, right-click the pattern name and choose Install.

  4. It is not possible to remove a pattern per se. Instead, select the packages of a pattern you want to remove and mark them for removal.

  5. To select more packages, repeat the steps mentioned above.

  6. Before applying your changes, you can review or modify them by clicking View › Installation Summary. By default, all packages that will change status are listed.

  7. To revert the status for a package, right-click the package and select one of the following entries: Keep if the package was scheduled to be deleted or updated, or Do Not Install if it was scheduled for installation. To abandon all changes and quit the Software Manager, click Cancel and Abandon.

  8. When you are finished, click Accept to apply your changes.

  9. In case YaST found dependencies on other packages, a list of packages that have additionally been chosen for installation, update or removal is presented. Click Continue to accept them.

    After all selected packages are installed, updated or removed, the YaST Software Manager automatically terminates.

Note
Note: Installing Source Packages

Installing source packages with YaST Software Manager is not possible at the moment. Use the command line tool zypper for this purpose. For more information, see Section 6.1.2.5, “Installing or Downloading Source Packages”.

10.3.3 Updating Packages

Instead of updating individual packages, you can also update all installed packages or all packages from a certain repository. When mass updating packages, the following aspects are generally considered:

  • priorities of the repositories that provide the package,

  • architecture of the package (for example, AMD64/Intel 64),

  • version number of the package,

  • package vendor.

Which of the aspects has the highest importance for choosing the update candidates depends on the respective update option you choose.

  1. To update all installed packages to the latest version, choose Package › All Packages › Update if Newer Version Available from the main menu.

    All repositories are checked for possible update candidates, using the following policy: YaST first tries to restrict the search to packages with the same architecture and vendor as the installed one. If the search is positive, the best update candidate from those is selected according to the process below. However, if no comparable package of the same vendor can be found, the search is expanded to all packages with the same architecture. If still no comparable package can be found, all packages are considered and the best update candidate is selected according to the following criteria:

    1. Repository priority: Prefer the package from the repository with the highest priority.

    2. If more than one package results from this selection, choose the one with the best architecture (best choice: matching the architecture of the installed one).

    If the resulting package has a higher version number than the installed one, the installed package will be updated and replaced with the selected update candidate.

    This option tries to avoid changes in architecture and vendor for the installed packages, but under certain circumstances, they are tolerated.

    Note
    Note: Update Unconditionally

    If you choose Package › All Packages › Update Unconditionally instead, the same criteria apply but any candidate package found is installed unconditionally. Thus, choosing this option might actually lead to downgrading some packages.

  2. To make sure that the packages for a mass update derive from a certain repository:

    1. Choose the repository from which to update as described in Section 10.3.1, “Views for Searching Packages or Patterns” .

    2. On the right hand side of the window, click Switch system packages to the versions in this repository. This explicitly allows YaST to change the package vendor when replacing the packages.

      When you proceed with Accept, all installed packages will be replaced by packages deriving from this repository, if available. This may lead to changes in vendor and architecture and even to downgrading some packages.

    3. To refrain from this, click Cancel switching system packages to the versions in this repository. Note that you can only cancel this until you click the Accept button.

  3. Before applying your changes, you can review or modify them by clicking View › Installation Summary. By default, all packages that will change status are listed.

  4. If all options are set according to your wishes, confirm your changes with Accept to start the mass update.

10.3.4 Checking Software Dependencies

Most packages are dependent on other packages. If a package, for example, uses a shared library, it is dependent on the package providing this library. On the other hand, some packages cannot coexist, causing a conflict (for example, you can only install one mail transfer agent: sendmail or postfix). When installing or removing software, the Software Manager makes sure no dependencies or conflicts remain unsolved to ensure system integrity.

If only one solution exists to resolve a dependency or a conflict, it is resolved automatically. Multiple solutions always cause a conflict that needs to be resolved manually. If solving a conflict involves a vendor or architecture change, it also needs to be solved manually. When clicking Accept to apply any changes in the Software Manager, you get an overview of all actions triggered by the automatic resolver, which you need to confirm.

By default, dependencies are automatically checked. A check is performed every time you change a package status (for example, by marking a package for installation or removal). This is generally useful, but can become exhausting when manually resolving a dependency conflict. To disable this function, go to the main menu and deactivate Dependencies › Autocheck. Manually perform a dependency check with Dependencies › Check Now. A consistency check is always performed when you confirm your selection with Accept.

To review a package's dependencies, right-click it and choose Show Solver Information. A map showing the dependencies opens. Packages that are already installed are displayed in a green frame.

Note
Note: Manually Solving Package Conflicts

Unless you are very experienced, follow the suggestions YaST makes when handling package conflicts, otherwise you may not be able to resolve them. Keep in mind that every change you make potentially triggers other conflicts, so you can easily end up with a steadily increasing number of conflicts. In case this happens, Cancel the Software Manager, Abandon all your changes and start again.

Conflict Management of the Software Manager
Figure 10.1: Conflict Management of the Software Manager

10.3.4.1 Handling of Package Recommendations

In addition to the hard dependencies required to run a program (for example a certain library), a package can also have weak dependencies that add, for example, extra functionality or translations. These weak dependencies are called package recommendations.

The way package recommendations are handled has slightly changed starting with SUSE Linux Enterprise Desktop 12 SP1. Nothing has changed when installing a new package—recommended packages are still installed by default.

Prior to SUSE Linux Enterprise Desktop 12 SP1, missing recommendations for already installed packages were installed automatically. Now these packages will no longer be installed automatically. To switch to the old default, set PKGMGR_REEVALUATE_RECOMMENDED="yes" in /etc/sysconfig/yast2. To install all missing recommendations for already installed packages, start YaST › Software Manager and choose Extras › Install All Matching Recommended Packages.

To disable the installation of recommended packages when installing new packages, deactivate Dependencies › Install Recommended Packages in the YaST Software Manager. If using the command line tool Zypper to install packages, use the option --no-recommends.
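
For example, to install a single package without its recommended extras from the command line (PACKAGE_NAME is a placeholder):

tux > sudo zypper install --no-recommends PACKAGE_NAME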

10.4 Managing Software Repositories and Services

To install third-party software, add additional software repositories to your system. By default, the product repositories such as SUSE Linux Enterprise Desktop-DVD 12 SP3 and a matching update repository are automatically configured after you have registered your system. For more information about registration, see Section 3.7, “SUSE Customer Center Registration” or Section 17.9, “Registering Your System”. Depending on the initially selected product, an additional repository containing translations, dictionaries, etc. might also be configured.

To manage repositories, start YaST and select Software › Software Repositories. The Configured Software Repositories dialog opens. Here, you can also manage subscriptions to so-called Services by changing the View at the right corner of the dialog to All Services. A Service in this context is a Repository Index Service (RIS) that can offer one or more software repositories. Such a Service can be changed dynamically by its administrator or vendor.

Each repository provides files describing content of the repository (package names, versions, etc.). These repository description files are downloaded to a local cache that is used by YaST. To ensure their integrity, software repositories can be signed with the GPG Key of the repository maintainer. Whenever you add a new repository, YaST offers the ability to import its key.

Warning
Warning: Trusting External Software Sources

Before adding external software repositories to your list of repositories, make sure this repository can be trusted. SUSE is not responsible for any problems arising from software installed from third-party software repositories.

10.4.1 Adding Software Repositories

You can add repositories from DVD/CD, removable mass storage devices (such as flash disks), a local directory, an ISO image, or a network source.

To add repositories from the Configured Software Repositories dialog in YaST, proceed as follows:

  1. Click Add.

  2. Select one of the options listed in the dialog:

    Adding a Software Repository
    Figure 10.2: Adding a Software Repository
    • To scan your network for installation servers announcing their services via SLP, select Scan Using SLP and click Next.

    • To add a repository from a removable medium, choose the relevant option and insert the medium or connect the USB device to the machine, respectively. Click Next to start the installation.

    • For the majority of repositories, you will be asked to specify the path (or URL) to the media after selecting the respective option and clicking Next. Specifying a Repository Name is optional. If none is specified, YaST will use the product name or the URL as repository name.

    The option Download Repository Description Files is activated by default. If you deactivate the option, YaST will automatically download the files later, if needed.

  3. Depending on the repository you have added, you may be prompted to import the repository's GPG key or asked to agree to a license.

    After confirming these messages, YaST will download and parse the metadata. It will add the repository to the list of Configured Repositories.

  4. If needed, adjust the repository Properties as described in Section 10.4.2, “Managing Repository Properties”.

  5. Confirm your changes with OK to close the configuration dialog.

  6. After having successfully added the repository, the software manager starts and you can install packages from this repository. For details, refer to Chapter 10, Installing or Removing Software.
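
Repositories can also be added without YaST by using Zypper; a minimal sketch, where the URL and the alias example-repo are illustrative:

sudo zypper addrepo -f http://example.com/repo example-repo   # -f enables autorefresh
zypper repos                                                  # verify the new entry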

10.4.2 Managing Repository Properties

The Configured Software Repositories overview lets you change the following repository properties:

Status

The repository status can either be Enabled or Disabled. You can only install packages from repositories that are enabled. To turn a repository off temporarily, select it and deactivate Enable. You can also double-click a repository name to toggle its status. To remove a repository completely, click Delete.
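
The same status changes can be made on the command line with Zypper; a brief sketch, where the alias example-repo is illustrative:

sudo zypper modifyrepo --disable example-repo   # temporarily disable the repository
sudo zypper modifyrepo --enable example-repo    # enable it again
sudo zypper removerepo example-repo             # remove it completely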

Refresh

When refreshing a repository, its content description (package names, versions, etc.) is downloaded to a local cache that is used by YaST. It is sufficient to do this once for static repositories such as CDs or DVDs, whereas repositories whose content changes often should be refreshed frequently. The easiest way to keep a repository's cache up-to-date is to choose Automatically Refresh. To do a manual refresh click Refresh and select one of the options.
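
A manual refresh can also be triggered with Zypper (the alias example-repo is illustrative):

sudo zypper refresh                # refresh all enabled repositories
sudo zypper refresh example-repo   # refresh a single repository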

Keep Downloaded Packages

Packages from remote repositories are downloaded before being installed. By default, they are deleted upon a successful installation. Activating Keep Downloaded Packages prevents the deletion of downloaded packages. The download location is configured in /etc/zypp/zypp.conf; by default, it is /var/cache/zypp/packages.
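
This behavior can also be set per repository in its .repo file; a minimal sketch, where the file name, alias, and URL are illustrative:

# /etc/zypp/repos.d/example-repo.repo
[example-repo]
name=Example Repository
enabled=1
autorefresh=1
baseurl=http://example.com/repo
keeppackages=1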

Priority

The Priority of a repository is a value between 1 and 200, with 1 being the highest and 200 the lowest priority. Any new repositories added with YaST get a priority of 99 by default. If you do not care about the priority of a certain repository, you can set the value to 0 to apply the default priority (99). If a package is available in more than one repository, the repository with the highest priority takes precedence. This is useful to avoid unnecessarily downloading packages from the Internet by giving a local repository (for example, a DVD) a higher priority.

Important
Important: Priority Compared to Version

The repository with the highest priority takes precedence in any case. Therefore, make sure that the update repository always has the highest priority; otherwise, you might install an outdated version that will not be updated until the next online update.
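
Priorities can also be adjusted with Zypper; a short sketch, where the alias dvd-repo is illustrative:

sudo zypper modifyrepo --priority 50 dvd-repo   # a lower number means a higher priority
zypper lr -P                                    # list repositories sorted by priority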

Name and URL

To change a repository name or its URL, select it from the list with a single-click and then click Edit.

10.4.3 Managing Repository Keys

To ensure their integrity, software repositories can be signed with the GPG Key of the repository maintainer. Whenever you add a new repository, YaST offers to import its key. Verify it as you would do with any other GPG key and make sure it does not change. If you detect a key change, something might be wrong with the repository. Disable the repository as an installation source until you know the cause of the key change.

To manage all imported keys, click GPG Keys in the Configured Software Repositories dialog. Select an entry with the mouse to show the key properties at the bottom of the window. Add, Edit or Delete keys with a click on the respective buttons.
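
The imported keys end up in the RPM database and can also be inspected from the command line; a brief sketch (the key ID shown is hypothetical):

rpm -qa 'gpg-pubkey*'                  # list all imported GPG keys
rpm -qi gpg-pubkey-39db7c82-510a966b   # show the details of one key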

10.5 Keeping the System Up-to-date

  • Filename: updater_gnome.xml
  • ID: sec.updater

SUSE offers a continuous stream of software security patches for your product. They can be installed using the YaST Online Update module. It also offers advanced features to customize the patch installation.

The GNOME desktop also provides a tool for installing patches and for updating packages that are already installed. In contrast to a patch, a package update relates to a single package and provides a newer version of it. The GNOME tool lets you install both patches and package updates with a few clicks as described in Section 10.5.2, “Installing Patches and Package Updates”.

10.5.1 The GNOME Software Updater

Whenever new patches or package updates are available, GNOME shows a notification about this at the bottom of the desktop (or on the locked screen).

Update Notification on GNOME Lock Screen
Figure 10.3: Update Notification on GNOME Lock Screen

10.5.2 Installing Patches and Package Updates

When such a notification appears on the desktop, proceed as follows to install the patches and package updates:

Update Notification on GNOME Desktop
Figure 10.4: Update Notification on GNOME Desktop
  1. To install the patches and updates, click Install updates in the notification message. This opens the GNOME update viewer. Alternatively, open the update viewer from Applications › System Tools › Software Update or press Alt+F2 and enter gpk-update-viewer.

  2. All Security Updates and Important Updates are preselected. It is strongly recommended to install these patches. Other Updates can be manually selected by activating the respective check boxes. Get detailed information on a patch or package update by clicking its title.

  3. Click Install Updates to start the installation. You will be prompted for the root password.

  4. Enter the root password in the authentication dialog and proceed.

GNOME Update Viewer
Figure 10.5: GNOME Update Viewer

10.5.3 Configuring the GNOME Software Updater

To configure notifications, select Applications › System Settings › Notification › Software Update and adjust the desired settings.

To configure how often to check for updates or to activate or deactivate repositories, select Applications › System Tools › Settings › Software Settings. The tabs of the configuration dialog let you modify the following settings:

Update Settings
Check for Updates

Choose how often a check for updates is performed: Hourly, Daily, Weekly, or Never.

Check for Major Upgrades

Choose how often a check for major upgrades is performed: Daily, Weekly, or Never.

Check for updates when using mobile broadband

This configuration option is only available on mobile computers. Turned off by default.

Check for updates on battery power

This configuration option is only available on mobile computers. Turned off by default.

Software Sources
Repositories

Lists the repositories that will be checked for available patches and package updates. You can enable or disable certain repositories.

Important
Important: Keep Update Repository Enabled

To make sure that you are notified about any patches that are security-relevant, keep the Updates repository for your product enabled.

More options are configurable using gconf-editor: apps › gnome-packagekit.
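
Assuming the gconftool-2 utility is installed, the same keys can also be listed non-interactively:

gconftool-2 --recursive-list /apps/gnome-packagekit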

11 Installing Modules, Extensions, and Third Party Add-On Products

  • Filename: yast2_sw_addon.xml
  • ID: cha.add-ons
Abstract

Modules and extensions add parts or functionality to the system. Modules are fully supported parts of SUSE Linux Enterprise Desktop with a different life cycle and update timeline. They are a set of packages, have a clearly defined scope and are delivered via online channel only.

Extensions, such as the Workstation Extension or the High Availability Extension, add extra functionality to the system and require their own registration key, which is subject to costs. Extensions are delivered via online channel or physical media. Registering at the SUSE Customer Center or a local registration server is a prerequisite for subscribing to the online channels. The Package Hub (Section 11.5, “SUSE Package Hub”) and SUSE Software Development Kit (Section 11.4, “SUSE Software Development Kit (SDK) 12 SP3”) extensions are exceptions: they do not require a registration key and are not covered by SUSE support agreements.

A list of modules and extensions for your product is available after having registered your system at SUSE Customer Center or a local registration server. If you skipped the registration step during the installation, you can register your system at any time using the SUSE Customer Center Configuration module in YaST. For details, refer to Section 17.9, “Registering Your System”.

Some add-on products are also provided by third parties, for example, binary-only drivers that are needed by certain hardware to function properly. If you have such hardware, refer to the release notes for more information about availability of binary drivers for your system. The release notes are available from http://www.suse.com/releasenotes/, from YaST or from /usr/share/doc/release-notes/ in your installed system.

11.1 List of Optional Modules

Besides the base operating system, SUSE Linux Enterprise Desktop 12 provides optional modules included in the subscription. Each module has a different life cycle. This approach offers faster integration with upstream updates. The following is a list of all optional modules together with brief descriptions:

Web and Scripting Module

The Web and Scripting Module delivers a comprehensive set of scripting languages, frameworks and related tools to help developers and system administrators accelerate the creation of stable, modern web applications. The module includes recent versions of dynamic languages, such as PHP and Python. If you intend to run a web server or host applications that provide web portals or require server-side scripts, the Web and Scripting Module is a must.

Legacy Module

The Legacy Module helps you migrate applications from older systems to SUSE Linux Enterprise Desktop 12. For organizations moving from UNIX to Linux, this module may be essential. Many older applications require packages that are no longer available with the latest SUSE Linux Enterprise Desktop version. This module provides those packages. It includes packages such as sendmail, syslog-ng, IBM Java6 and a number of libraries (for example, openssl-0.9.8).

Public Cloud Module

The Public Cloud Module is a collection of tools to create and manage public cloud images from the command line. When you build your own images with KIWI or SUSE Studio, initialization code specific to the target cloud is included in those images.

The Public Cloud Module contains four patterns:

  • Amazon-Web-Services (aws-cli, cloud-init)

  • Microsoft-Azure (WALinuxAgent)

  • Google-Cloud-Platform (gcimagebundle, google-api-python-client, google-cloud-sdk, google-daemon, google-startup-scripts)

  • OpenStack (OpenStack-heat-cfntools, cloud-init)

Toolchain Module

This module offers software developers a current toolchain consisting of the GNU Compiler Collection (GCC) and related packages as well as updated applications, improvements, new standards and additional hardware features. It allows software developers to benefit from the new features of the most recent GCC release and brings improvements in language support, such as support for most C++14 changes and extended Fortran 2008 and 2015 support, as well as many new optimizations. For more details, see https://gcc.gnu.org/gcc-5/changes.html.

Advanced Systems Management Module

This module contains three components to support system administrators in automating tasks in the data center and cloud: the configuration management tools CFEngine and Puppet, and the new Machinery infrastructure. Machinery is a systems management toolbox that allows you to inspect systems remotely, store their system descriptions, and create new system images to deploy in data centers and clouds.

For more information about the Machinery project, see http://machinery-project.org/.

Containers Module

This module contains several packages revolving around containers and related tools, including the open source project Docker and prepackaged images for SUSE Linux Enterprise Server 11 and SUSE Linux Enterprise Server 12.

HPC Module

The HPC module provides a selected set of tools and components used in High Performance Computing environments. To fulfill changing customer needs for leading edge HPC support on both hardware and software, this module provides software components frequently updated to the latest versions available. The selection of software components has been inspired by (but not limited to) what is provided by the OpenHPC community project at http://openhpc.community/.

11.2 Installing Modules and Extensions from Online Channels

Tip
Tip: SUSE Linux Enterprise Desktop

As of SUSE Linux Enterprise 12, SUSE Linux Enterprise Desktop is not only available as a separate product, but also as a Workstation Extension for SUSE Linux Enterprise Server. If you register at the SUSE Customer Center, the Workstation Extension can be selected for installation. Note that installing it requires a valid registration key.

The following procedure requires that you have registered your system with SUSE Customer Center or a local registration server. When registering your system, you will see a list of extensions and modules immediately after having completed Step 4 of Section 17.9, “Registering Your System”. In that case, skip the first step of the following procedure and proceed with Step 2.

Note
Note: Viewing Already Installed Add-Ons

To view already installed add-ons, start YaST and select Software › Add-Ons.

Procedure 11.1: Installing Add-ons and Extensions from Online Channels with YaST
  1. Start YaST and select Software › Add System Extensions or Modules.

    YaST connects to the registration server and displays a list of Available Extensions and Modules.

    Note
    Note: Available Extensions and Modules

    The number of available extensions and modules depends on the registration server. A local registration server may only offer update repositories and no additional extensions.

    Note
    Note: Module Life Cycles

    Life cycle end dates of modules are available at https://scc.suse.com/docs/lifecycle/sle/12/modules.

  2. Click an entry to see its description.

  3. Select one or multiple entries for installation by activating their check marks.

    Installation of System Extensions
    Figure 11.1: Installation of System Extensions
  4. Click Next to proceed.

  5. Depending on the repositories to be added for the extension or module, you may be prompted to import the repository's GPG key or asked to agree to a license.

    After confirming these messages, YaST will download and parse the metadata. The repositories for the selected extensions will be added to your system—no additional installation sources are required.

  6. If needed, adjust the repository Properties as described in Section 10.4.2, “Managing Repository Properties”.

Note
Note: For More Information

See the white paper SUSE Linux Enterprise Server 12 Modules.

11.3 Installing Extensions and Third Party Add-On Products from Media

When installing an extension or add-on product from media, you can select various types of product media, like DVD/CD, removable mass storage devices (such as flash disks), or a local directory or ISO image. The media can also be provided by a network server, for example, via HTTP, FTP, NFS, or Samba.

  1. Start YaST and select Software › Add-On Products. Alternatively, start the YaST Add-On Products module from the command line with sudo yast2 add-on.

    The dialog will show an overview of already installed add-on products, modules and extensions.

    List of Installed Add-on Products, Modules and Extensions
    Figure 11.2: List of Installed Add-on Products, Modules and Extensions
  2. Choose Add to install a new add-on product.

  3. In the Add-On Product dialog, select the option that matches the type of medium from which you want to install:

    Installation of an Add-on Product or an Extension
    Figure 11.3: Installation of an Add-on Product or an Extension
    • To scan your network for installation servers announcing their services via SLP, select Scan Using SLP and click Next.

    • To add a repository from a removable medium, choose the relevant option and insert the medium or connect the USB device to the machine, respectively. Click Next to start the installation.

    • For most media types, you will be prompted to specify the path (or URL) to the media after selecting the respective option and clicking Next. Specifying a Repository Name is optional. If none is specified, YaST will use the product name or the URL as the repository name.

    The option Download Repository Description Files is activated by default. If you deactivate the option, YaST will automatically download the files later, if needed.

  4. Depending on the repository you have added, you may be prompted to import the repository's GPG key or asked to agree to a license.

    After confirming these messages, YaST will download and parse the metadata. It will add the repository to the list of Configured Repositories.

  5. If needed, adjust the repository Properties as described in Section 10.4.2, “Managing Repository Properties”.

  6. Confirm your changes with OK to close the configuration dialog.

  7. After having successfully added the repository for the add-on media, the software manager starts and you can install packages. For details, refer to Chapter 10, Installing or Removing Software.

11.4 SUSE Software Development Kit (SDK) 12 SP3

SUSE Software Development Kit 12 SP3 is an extension for SUSE Linux Enterprise 12 SP3. It is a complete tool kit for application development. In fact, to provide a comprehensive build system, SUSE Software Development Kit 12 SP3 includes all the open source tools that were used to build the SUSE Linux Enterprise Server product. It provides you as a developer, independent software vendor (ISV), or independent hardware vendor (IHV) with all the tools needed to port applications to all the platforms supported by SUSE Linux Enterprise Desktop and SUSE Linux Enterprise Server.

The SUSE Software Development Kit does not require a registration key and is not covered by SUSE support agreements.

SUSE Software Development Kit also contains integrated development environments (IDEs), debuggers, code editors, and other related tools. It supports most major programming languages, including C, C++, Java, and most scripting languages. For your convenience, SUSE Software Development Kit includes multiple Perl packages that are not included in SUSE Linux Enterprise.

The SDK extension is available via an online channel from the SUSE Customer Center. Alternatively, go to http://download.suse.com/, search for SUSE Linux Enterprise Software Development Kit and download it from there. Refer to Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products for details.

11.5 SUSE Package Hub

In the list of Available Extensions and Modules you find the SUSE Package Hub. It is available without any additional fee. It provides a large set of additional community packages for SUSE Linux Enterprise that can easily be installed but are not supported by SUSE.

More information about SUSE Package Hub and how to contribute is available at https://packagehub.suse.com/.

Important
Important: SUSE Package Hub is Not Supported

Be aware that packages provided in the SUSE Package Hub are not officially supported by SUSE. SUSE only provides support for enabling the Package Hub repository and help with installation or deployment of the RPM packages.

12 Installing Multiple Kernel Versions

  • Filename: tuning_multikernel.xml
  • ID: cha.tuning.multikernel
Abstract

SUSE Linux Enterprise Desktop supports the parallel installation of multiple kernel versions. When installing a second kernel, a boot entry and an initrd are automatically created, so no further manual configuration is needed. When rebooting the machine, the newly added kernel is available as an additional boot option.

Using this functionality, you can safely test kernel updates while always being able to fall back to the proven former kernel. To do this, do not use the update tools (such as the YaST Online Update or the updater applet), but instead follow the process described in this chapter.

Warning
Warning: Support Entitlement

Be aware that you lose your entire support entitlement for the machine when installing a self-compiled or a third-party kernel. Only kernels shipped with SUSE Linux Enterprise Desktop and kernels delivered via the official update channels for SUSE Linux Enterprise Desktop are supported.

Tip
Tip: Check Your Boot Loader Configuration

It is recommended to check your boot loader configuration after having installed another kernel to set the default boot entry of your choice. See Section 13.3, “Configuring the Boot Loader with YaST” for more information.

12.1 Enabling and Configuring Multiversion Support

Installing multiple versions of a software package (multiversion support) is enabled by default since SUSE Linux Enterprise 12. To verify this setting, proceed as follows:

  1. Open /etc/zypp/zypp.conf with the editor of your choice as root.

  2. Search for the string multiversion. If multiversion is enabled for all kernel packages capable of this feature, the following line appears uncommented:

    multiversion = provides:multiversion(kernel)
  3. To restrict multiversion support to certain kernel flavors, add the package names as a comma-separated list to the multiversion option in /etc/zypp/zypp.conf—for example

    multiversion = kernel-default,kernel-default-base,kernel-source
  4. Save your changes.
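
To verify the active setting afterward, search the configuration file for uncommented multiversion lines, for example:

grep '^multiversion' /etc/zypp/zypp.conf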

Warning
Warning: Kernel Module Packages (KMP)

Make sure that required vendor-provided kernel modules (Kernel Module Packages) are also installed for the new, updated kernel. The kernel update process will not warn about possibly missing kernel modules, because the package requirements are still fulfilled by the old kernel that is kept on the system.

12.1.1 Automatically Deleting Unused Kernels

When frequently testing new kernels with multiversion support enabled, the boot menu quickly becomes confusing. Since a /boot partition usually has limited space, you might also run into trouble with /boot overflowing. While you can delete unused kernel versions manually with YaST or Zypper (as described below), you can also configure libzypp to automatically delete kernels no longer used. By default, no kernels are deleted.

  1. Open /etc/zypp/zypp.conf with the editor of your choice as root.

  2. Search for the string multiversion.kernels and activate this option by uncommenting the line. This option takes a comma-separated list of the following values:

    3.12.24-7.1: keep the kernel with the specified version number

    latest: keep the kernel with the highest version number

    latest-N: keep the kernel with the Nth highest version number

    running: keep the running kernel

    oldest: keep the kernel with the lowest version number (the one that was originally shipped with SUSE Linux Enterprise Desktop)

    oldest+N: keep the kernel with the Nth lowest version number

    Here are some examples:

    multiversion.kernels = latest,running

    Keep the latest kernel and the one currently running. This is similar to not enabling the multiversion feature, except that the old kernel is removed after the next reboot and not immediately after the installation.

    multiversion.kernels = latest,latest-1,running

    Keep the last two kernels and the one currently running.

    multiversion.kernels = latest,running,3.12.25.rc7-test

    Keep the latest kernel, the one currently running, and 3.12.25.rc7-test.

    Tip
    Tip: Keep the Running Kernel

    Unless you are using a special setup, always keep the kernel marked running.

    If you do not keep the running kernel, it will be deleted when updating the kernel. In turn, this means that all of the running kernel's modules are also deleted and cannot be loaded anymore.

    If you decide not to keep the running kernel, always reboot immediately after a kernel upgrade to avoid issues with modules.

12.1.2 Use Case: Deleting an Old Kernel after Reboot Only

You want to make sure that an old kernel will only be deleted after the system has rebooted successfully with the new kernel.

Change the following line in /etc/zypp/zypp.conf:

multiversion.kernels = latest,running

These parameters tell the system to keep the latest kernel and the one currently running. The old kernel is only removed after the system has been rebooted into the new kernel, because only then do latest and running refer to the same kernel.

12.1.3 Use Case: Keeping Older Kernels as Fallback

You want to keep one or more kernel versions to have one or more spare kernels.

This can be useful if you need kernels for testing. If something goes wrong (for example, your machine does not boot), you still can use one or more kernel versions which are known to be good.

Change the following line in /etc/zypp/zypp.conf:

multiversion.kernels = latest,latest-1,latest-2,running

When you reboot your system after the installation of a new kernel, the system will keep three kernels: the current kernel (configured as latest,running) and its two immediate predecessors (configured as latest-1 and latest-2).

12.1.4 Use Case: Keeping a Specific Kernel Version

You make regular system updates and install new kernel versions. However, you are also compiling your own kernel version and want to make sure that the system keeps it.

Change the following line in /etc/zypp/zypp.conf:

multiversion.kernels = latest,3.12.28-4.20,running

When you reboot your system after the installation of a new kernel, the system will keep two kernels: the new and running kernel (configured as latest,running) and your self-compiled kernel (configured as 3.12.28-4.20).

12.2 Installing/Removing Multiple Kernel Versions with YaST

  1. Start YaST and open the software manager via Software › Software Management.

  2. List all packages capable of providing multiple versions by choosing View › Package Groups › Multiversion Packages.

    The YaST Software Manager: Multiversion View
    Figure 12.1: The YaST Software Manager: Multiversion View
  3. Select a package and open its Version tab in the bottom pane on the left.

  4. To install a package, click the check box next to it. A green check mark indicates it is selected for installation.

    To remove an already installed package (marked with a white check mark), click the check box next to it until a red X indicates it is selected for removal.

  5. Click Accept to start the installation.

12.3 Installing/Removing Multiple Kernel Versions with Zypper

  1. Use the command zypper se -s 'kernel*' to display a list of all kernel packages available:

    S | Name           | Type       | Version         | Arch   | Repository
    --+----------------+------------+-----------------+--------+-------------------
    v | kernel-default | package    | 2.6.32.10-0.4.1 | x86_64 | Alternative Kernel
    i | kernel-default | package    | 2.6.32.9-0.5.1  | x86_64 | (System Packages)
      | kernel-default | srcpackage | 2.6.32.10-0.4.1 | noarch | Alternative Kernel
    ...
  2. Specify the exact version when installing:

    zypper in kernel-default-2.6.32.10-0.4.1
  3. When uninstalling a kernel, use the commands zypper se -si 'kernel*' to list all kernels installed and zypper rm PACKAGENAME-VERSION to remove the package.
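
Putting both steps together, a short sketch that reuses the versions from the listing above:

zypper se -si 'kernel*'                        # list all installed kernel packages
sudo zypper rm kernel-default-2.6.32.9-0.5.1   # remove one specific version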

13 Managing Users with YaST

  • Filename: yast2_userman.xml
  • ID: cha.y2.userman

During installation, you may have created a local user for your system. With the YaST module User and Group Management you can add more users or edit existing ones. It also lets you configure your system to authenticate users with a network server.

13.1 User and Group Administration Dialog

To administer users or groups, start YaST and click Security and Users › User and Group Management. Alternatively, start the User and Group Administration dialog directly by running sudo yast2 users & from a command line.

YaST User and Group Administration
Figure 13.1: YaST User and Group Administration

Every user is assigned a system-wide user ID (UID). Apart from the users which can log in to your machine, there are also several system users for internal use only. Each user is assigned to one or more groups. Similar to system users, there are also system groups for internal use.

Depending on the set of users you choose to view and modify with the dialog (local users, network users, or system users), the main window shows several tabs. These allow you to execute the following tasks:

Managing User Accounts

From the Users tab create, modify, delete or temporarily disable user accounts as described in Section 13.2, “Managing User Accounts”. Learn about advanced options like enforcing password policies, using encrypted home directories, or managing disk quotas in Section 13.3, “Additional Options for User Accounts”.

Changing Default Settings

Local user accounts are created according to the settings defined on the Defaults for New Users tab. Learn how to change the default group assignment, or the default path and access permissions for home directories in Section 13.4, “Changing Default Settings for Local Users”.

Assigning Users to Groups

Learn how to change the group assignment for individual users in Section 13.5, “Assigning Users to Groups”.

Managing Groups

From the Groups tab, you can add, modify or delete existing groups. Refer to Section 13.6, “Managing Groups” for information on how to do this.

Changing the User Authentication Method

When your machine is connected to a network that provides user authentication methods like NIS or LDAP, you can choose between several authentication methods on the Authentication Settings tab. For more information, refer to Section 13.7, “Changing the User Authentication Method”.

For user and group management, the dialog provides similar functionality. You can easily switch between the user and group administration view by choosing the appropriate tab at the top of the dialog.

Filter options allow you to define the set of users or groups you want to modify: On the Users or Groups tab, click Set Filter to view and edit users or groups according to certain categories, such as Local Users or LDAP Users (if you are part of a network that uses LDAP). With Set Filter › Customize Filter you can also set up and use a custom filter.

Depending on the filter you choose, not all of the following options and functions will be available from the dialog.

13.2 Managing User Accounts

With YaST, you can create, modify, delete, or temporarily disable user accounts. Do not modify user accounts unless you are an experienced user or administrator.

Note
Note: Changing User IDs of Existing Users

File ownership is bound to the user ID, not to the user name. After a user ID change, the files in the user's home directory are automatically adjusted to reflect this change. However, after an ID change, the user no longer owns the files he created elsewhere in the file system unless the file ownership of those files is manually modified.

In the following, learn how to set up default user accounts. For further options, refer to Section 13.3, “Additional Options for User Accounts”.

Procedure 13.1: Adding or Modifying User Accounts
  1. Open the YaST User and Group Administration dialog and click the Users tab.

  2. With Set Filter define the set of users you want to manage. The dialog lists users in the system and the groups the users belong to.

  3. To modify options for an existing user, select an entry and click Edit.

    To create a new user account, click Add.

  4. Enter the appropriate user data on the first tab, such as Username (which is used for login) and Password. This data is sufficient to create a new user. If you click OK now, the system will automatically assign a user ID and set all other values according to the default.

  5. Activate Receive System Mail if you want any kind of system notifications to be delivered to this user's mailbox. This creates a mail alias for root and the user can read the system mail without having to first log in as root.

    The mails sent by system services are stored in the local mailbox /var/spool/mail/USERNAME, where USERNAME is the login name of the selected user. To read e-mails, you can use the mail command (see the example after this procedure).

  6. To adjust further details such as the user ID or the path to the user's home directory, do so on the Details tab.

    If you need to relocate the home directory of an existing user, enter the path to the new home directory there and move the contents of the current home directory with Move to New Location. Otherwise, a new home directory is created without any of the existing data.

  7. To force users to regularly change their password or set other password options, switch to Password Settings and adjust the options. For more details, refer to Section 13.3.2, “Enforcing Password Policies”.

  8. If all options are set according to your wishes, click OK.

  9. Click OK to close the administration dialog and to save the changes. A newly added user can now log in to the system using the login name and password you created.

    Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.
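
For example, to read the system mail of the user tux (the user name is illustrative; reading another user's mailbox requires root privileges):

sudo mail -f /var/spool/mail/tux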

Tip
Tip: Matching User IDs

For a new (local) user on a laptop which also needs to integrate into a network environment where this user already has a user ID, it is useful to match the (local) user ID to the ID in the network. This ensures that the file ownership of the files the user creates offline is the same as if he had created them directly on the network.

Procedure 13.2: Disabling or Deleting User Accounts
  1. Open the YaST User and Group Administration dialog and click the Users tab.

  2. To temporarily disable a user account without deleting it, select the user from the list and click Edit. Activate Disable User Login. The user cannot log in to your machine until you enable the account again.

  3. To delete a user account, select the user from the list and click Delete. Choose if you also want to delete the user's home directory or if you want to retain the data.

13.3 Additional Options for User Accounts

In addition to the settings for a default user account, SUSE® Linux Enterprise Desktop offers further options, such as options to enforce password policies, use encrypted home directories or define disk quotas for users and groups.

13.3.1 Automatic Login and Passwordless Login

If you use the GNOME desktop environment, you can configure Auto Login for a certain user and Passwordless Login for all users. Auto login causes a user to be logged in to the desktop environment automatically on boot. This functionality can only be activated for one user at a time. Login without password allows all users to log in to the system after they have entered their user name in the login manager.

Warning
Warning: Security Risk

Enabling Auto Login or Passwordless Login on a machine that can be accessed by more than one person is a security risk. Without the need to authenticate, any user can gain access to your system and your data. If your system contains confidential data, do not use this functionality.

To activate auto login or login without password, access these functions in the YaST User and Group Administration with Expert Options › Login Settings.

13.3.2 Enforcing Password Policies

On any system with multiple users, it is a good idea to enforce at least basic password security policies. Users should change their passwords regularly and use strong passwords that cannot easily be exploited. For local users, proceed as follows:

Procedure 13.3: Configuring Password Settings
  1. Open the YaST User and Group Administration dialog and select the Users tab.

  2. Select the user for which to change the password options and click Edit.

  3. Switch to the Password Settings tab. The user's last password change is displayed on the tab.

  4. To make the user change his password at next login, activate Force Password Change.

  5. To enforce password rotation, set a Maximum Number of Days for the Same Password and a Minimum Number of Days for the Same Password.

  6. To remind the user to change his password before it expires, set the number of Days before Password Expiration to Issue Warning.

  7. To restrict the period of time the user can log in after his password has expired, change the value in Days after Password Expires with Usable Login.

  8. You can also specify a certain expiration date for the complete account. Enter the Expiration Date in YYYY-MM-DD format. Note that this setting is not password-related but rather applies to the account itself.

  9. For more information about the options and about the default values, click Help.

  10. Apply your changes with OK.

13.3.3 Managing Encrypted Home Directories

To protect data in home directories against theft and hard disk removal, you can create encrypted home directories for users. These are encrypted with LUKS (Linux Unified Key Setup), which results in an image and an image key being generated for the user. The image key is protected with the user's login password. When the user logs in to the system, the encrypted home directory is mounted and the contents are made available to the user.

With YaST, you can create encrypted home directories for new or existing users. To encrypt or modify encrypted home directories of already existing users, you need to know the user's current login password. By default, all existing user data is copied to the new encrypted home directory, but it is not deleted from the unencrypted directory.

Warning
Warning: Security Restrictions

Encrypting a user's home directory does not provide strong security from other users. If strong security is required, the system should not be physically shared.

Find background information about encrypted home directories and which actions to take for stronger security in Section 11.2, “Using Encrypted Home Directories”.

Procedure 13.4: Creating Encrypted Home Directories
  1. Open the YaST User and Group Management dialog and click the Users tab.

  2. To encrypt the home directory of an existing user, select the user and click Edit.

    Otherwise, click Add to create a new user account and enter the appropriate user data on the first tab.

  3. In the Details tab, activate Use Encrypted Home Directory. With Directory Size in MB, specify the size of the encrypted image file to be created for this user.

  4. Apply your settings with OK.

  5. Enter the user's current login password to proceed if YaST prompts for it.

  6. Click OK to close the administration dialog and save the changes.

    Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.

Procedure 13.5: Modifying or Disabling Encrypted Home Directories

Of course, you can also disable the encryption of a home directory or change the size of the image file at any time.

  1. Open the YaST User and Group Administration dialog in the Users view.

  2. Select a user from the list and click Edit.

  3. To disable the encryption, switch to the Details tab and disable Use Encrypted Home Directory.

    If you need to enlarge or reduce the size of the encrypted image file for this user, change the Directory Size in MB.

  4. Apply your settings with OK.

  5. Enter the user's current login password to proceed if YaST prompts for it.

  6. Click OK to close the administration dialog and save the changes.

    Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.

13.3.4 Managing Quotas

To prevent system capacities from being exhausted without notification, system administrators can set up quotas for users or groups. Quotas can be defined for one or more file systems and restrict the amount of disk space that can be used and the number of inodes (index nodes) that can be created there. Inodes are data structures on a file system that store basic information about a regular file, directory, or other file system object. They store all attributes of a file system object (like user and group ownership, read, write, or execute permissions), except file name and contents.

SUSE Linux Enterprise Desktop allows usage of soft and hard quotas. Additionally, grace intervals can be defined that allow users or groups to temporarily violate their quotas by certain amounts.

Soft Quota

Defines a warning level at which users are informed that they are nearing their limit. Administrators will urge the users to clean up and reduce their data on the partition. The soft quota limit is usually lower than the hard quota limit.

Hard Quota

Defines the limit at which write requests are denied. When the hard quota is reached, no more data can be stored and applications may crash.

Grace Period

Defines the time between the overflow of the soft quota and a warning being issued. Usually set to a rather low value of one or several hours.

Procedure 13.6: Enabling Quota Support for a Partition

To configure quotas for certain users and groups, you need to enable quota support for the respective partition in the YaST Expert Partitioner first.

  1. In YaST, select System › Partitioner and click Yes to proceed.

  2. In the Expert Partitioner, select the partition for which to enable quotas and click Edit.

  3. Click Fstab Options and activate Enable Quota Support. If the quota package is not already installed, it will be installed once you confirm the respective message with Yes.

  4. Confirm your changes and leave the Expert Partitioner.

  5. Make sure the service quotaon is running by entering the following command:

    systemctl status quotaon

    It should be marked as being active. If this is not the case, start it with the command systemctl start quotaon.

Procedure 13.7: Setting Up Quotas for Users or Groups

Now you can define soft or hard quotas for specific users or groups and set time periods as grace intervals.

  1. In the YaST User and Group Administration, select the user or the group you want to set the quotas for and click Edit.

  2. On the Plug-Ins tab, select the Manage User Quota entry and click Launch to open the Quota Configuration dialog.

  3. From File System, select the partition to which the quota should apply.

  4. Below Size Limits, restrict the amount of disk space. Enter the number of 1 KB blocks the user or group may have on this partition. Specify a Soft Limit and a Hard Limit value.

  5. Additionally, you can restrict the number of inodes the user or group may have on the partition. Below Inodes Limits, enter a Soft Limit and Hard Limit.

  6. You can only define grace intervals if the user or group has already exceeded the soft limit specified for size or inodes. Otherwise, the time-related text boxes are not activated. Specify the time period for which the user or group is allowed to exceed the limits set above.

  7. Confirm your settings with OK.

  8. Click OK to close the administration dialog and save the changes.

    Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.

SUSE Linux Enterprise Desktop also ships command-line tools like repquota or warnquota. System administrators can use these tools to control the disk usage or send e-mail notifications to users exceeding their quota. Using quota_nld, administrators can also forward kernel messages about exceeded quotas to D-BUS. For more information, refer to the repquota, warnquota, and quota_nld man pages.
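
For example, repquota can print a usage summary for all file systems with quotas enabled, or for a single one:

sudo repquota -a      # report on all quota-enabled file systems
sudo repquota /home   # report on a specific file system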

13.4 Changing Default Settings for Local Users

When creating new local users, several default settings are used by YaST. These include, for example, the primary group and the secondary groups the user belongs to, or the access permissions of the user's home directory. You can change these default settings to meet your requirements:

  1. Open the YaST User and Group Administration dialog and select the Defaults for New Users tab.

  2. To change the primary group the new users should automatically belong to, select another group from Default Group.

  3. To modify the secondary groups for new users, add or change groups in Secondary Groups. The group names must be separated by commas.

  4. If you do not want to use /home/USERNAME as default path for new users' home directories, modify the Path Prefix for Home Directory.

  5. To change the default permission modes for newly created home directories, adjust the umask value in Umask for Home Directory. For more information about umask, refer to Chapter 10, Access Control Lists in Linux and to the umask man page. A short illustration follows this list.

  6. For information about the individual options, click Help.

  7. Apply your changes with OK.
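
To illustrate the effect of the umask value from Step 5: new home directories are created with mode 0777 minus the bits set in the umask, so for example:

umask 022   # results in home directories with mode 755 (rwxr-xr-x)
umask 077   # results in home directories with mode 700 (rwx------)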

13.5 Assigning Users to Groups

Local users are assigned to several groups according to the default settings which you can access from the User and Group Administration dialog on the Defaults for New Users tab. In the following, learn how to modify an individual user's group assignment. If you need to change the default group assignments for new users, refer to Section 13.4, “Changing Default Settings for Local Users”.

Procedure 13.8: Changing a User's Group Assignment
  1. Open the YaST User and Group Administration dialog and click the Users tab. It lists users and the groups the users belong to.

  2. Click Edit and switch to the Details tab.

  3. To change the primary group the user belongs to, click Default Group and select the group from the list.

  4. To assign the user additional secondary groups, activate the corresponding check boxes in the Additional Groups list.

  5. Click OK to apply your changes.

  6. Click OK to close the administration dialog and save the changes.

    Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.

13.6 Managing Groups

With YaST you can also easily add, modify or delete groups.

Procedure 13.9: Creating and Modifying Groups
  1. Open the YaST User and Group Management dialog and click the Groups tab.

  2. With Set Filter define the set of groups you want to manage. The dialog lists groups in the system.

  3. To create a new group, click Add.

  4. To modify an existing group, select the group and click Edit.

  5. In the following dialog, enter or change the data. The list on the right shows an overview of all available users and system users which can be members of the group.

  6. To add existing users to a new group, select them from the list of possible Group Members by checking the corresponding box. To remove them from the group, deactivate the box.

  7. Click OK to apply your changes.

  8. Click OK to close the administration dialog and save the changes.

    Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.

A group can only be deleted if it contains no members. To delete a group, select it from the list and click Delete. Click OK to close the administration dialog and save the changes. Alternatively, to save all changes without exiting the User and Group Administration dialog, click Expert Options › Write Changes Now.

13.7 Changing the User Authentication Method

When your machine is connected to a network, you can change the authentication method. The following options are available:

NIS

Users are administered centrally on a NIS server for all systems in the network. For details, see Chapter 3, Using NIS.

SSSD

The System Security Services Daemon (SSSD) can locally cache user data and then allow users to use the data, even if the real directory service is (temporarily) unreachable. For details, see Section 4.3, “SSSD”.

Samba

SMB authentication is often used in mixed Linux and Windows networks. For details, see Chapter 27, Samba and Chapter 7, Active Directory Support.

To change the authentication method, proceed as follows:

  1. Open the User and Group Administration dialog in YaST.

  2. Click the Authentication Settings tab to show an overview of the available authentication methods and the current settings.

  3. To change the authentication method, click Configure and select the authentication method you want to modify. This takes you directly to the client configuration modules in YaST. For information about the configuration of the appropriate client, refer to the following sections:

    NIS:  Section 3.2, “Configuring NIS Clients”

    LDAP:  Section 4.1, “Configuring an Authentication Server”

    Samba:  Section 27.4.1, “Configuring a Samba Client with YaST”

  4. After accepting the configuration, return to the User and Group Administration overview.

  5. Click OK to close the administration dialog.

14 Changing Language and Country Settings with YaST

  • Filename: yast2_lang.xml
  • ID: cha.y2.lang

Working in different countries or having to work in a multilingual environment requires your computer to be set up to support this. SUSE® Linux Enterprise Desktop can handle different locales in parallel. A locale is a set of parameters that defines the language and country settings reflected in the user interface.

The main system language was selected during installation and keyboard and time zone settings were adjusted. However, you can install additional languages on your system and determine which of the installed languages should be the default.

For those tasks, use the YaST language module as described in Section 14.1, “Changing the System Language”. Install secondary languages to get optional localization if you need to start applications or desktops in languages other than the primary one.

Apart from that, the YaST timezone module allows you to adjust your country and timezone settings accordingly. It also lets you synchronize your system clock against a time server. For details, refer to Section 14.2, “Changing the Country and Time Settings”.

14.1 Changing the System Language

Depending on how you use your desktop and whether you want to switch the entire system to another language or only the desktop environment itself, there are several ways to do this:

Changing the System Language Globally

Proceed as described in Section 14.1.1, “Modifying System Languages with YaST” and Section 14.1.2, “Switching the Default System Language” to install additional localized packages with YaST and to set the default language. Changes are effective after the next login. To ensure that the entire system reflects the change, reboot the system or close and restart all running services, applications, and programs.

Changing the Language for the Desktop Only

Provided you have previously installed the desired language packages for your desktop environment with YaST as described below, you can switch the language of your desktop using the desktop's control center. Refer to Section 3.2.2, “Configuring Language Settings” for details. After the X server has been restarted, your entire desktop reflects your new choice of language. Applications not belonging to your desktop framework are not affected by this change and may still appear in the language that was set in YaST.

Temporarily Switching Languages for One Application Only

You can also run a single application in another language (that has already been installed with YaST). To do so, start it from the command line by specifying the language code as described in Section 14.1.3, “Switching Languages for Standard X and GNOME Applications”.

14.1.1 Modifying System Languages with YaST

YaST knows two different language categories:

Primary Language

The primary language set in YaST applies to the entire system, including YaST and the desktop environment. This language is used whenever available unless you manually specify another language.

Secondary Languages

Install secondary languages to make your system multilingual. Languages installed as secondary languages can be selected manually for a specific situation. For example, use a secondary language to start an application in a certain language to do word processing in this language.

Before installing additional languages, determine which of them should be the default system language (primary language).

To access the YaST language module, start YaST and click System › Language. Alternatively, start the Languages dialog directly by running sudo yast2 language & from a command line.

Procedure 14.1: Installing Additional Languages

When installing additional languages, YaST also allows you to set different locale settings for the user root, see Step 4. The option Locale Settings for User root determines how the locale variables (LC_*) in the file /etc/sysconfig/language are set for root. You can either set them to the same locale as for normal users, leave them unaffected by any language changes, or only set the variable RC_LC_CTYPE to the same value as for normal users. This variable sets the localization for language-specific function calls.

  1. To add additional languages in the YaST language module, select the Secondary Languages you want to install.

  2. To make a language the default language, set it as Primary Language.

  3. Additionally, adapt the keyboard to the new primary language and adjust the time zone, if appropriate.

    Tip
    Tip: Advanced Settings

    For advanced keyboard or time zone settings, select Hardware › System Keyboard Layout or System › Date and Time in YaST to start the respective dialogs. For more information, refer to Section 8.1, “Setting Up Your System Keyboard Layout” and Section 14.2, “Changing the Country and Time Settings”.

  4. To change language settings specific to the user root, click Details.

    1. Set Locale Settings for User root to the desired value. For more information, click Help.

    2. Decide if you want to Use UTF-8 Encoding for root or not.

  5. If your locale was not included in the list of primary languages available, try specifying it with Detailed Locale Setting. However, some localization may be incomplete.

  6. Confirm your changes in the dialogs with OK. If you have selected secondary languages, YaST installs the localized software packages for the additional languages.

The system is now multilingual. However, to start an application in a language other than the primary one, you need to set the desired language explicitly as explained in Section 14.1.3, “Switching Languages for Standard X and GNOME Applications”.

14.1.2 Switching the Default System Language

  1. To globally switch the default system language, start the YaST language module.

  2. Select the desired new system language as Primary Language.

    Important
    Important: Deleting Former System Languages

    If you switch to a different primary language, the localized software packages for the former primary language will be removed from the system. To switch the default system language but keep the former primary language as additional language, add it as Secondary Language by enabling the respective check box.

  3. Adjust the keyboard and time zone options as desired.

  4. Confirm your changes with OK.

  5. After YaST has applied the changes, restart current X sessions (for example, by logging out and logging in again) to make YaST and the desktop applications reflect your new language settings.

14.1.3 Switching Languages for Standard X and GNOME Applications

After you have installed the respective language with YaST, you can run a single application in another language.

Start the application from the command line by using the following command:

LANG=LANGUAGE application

For example, to start f-spot in German, run LANG=de_DE f-spot. For other languages, use the appropriate language code. Get a list of all available language codes with the locale -av command.

14.2 Changing the Country and Time Settings

Using the YaST date and time module, adjust your system date, clock and time zone information to the area you are working in. To access the YaST module, start YaST and click System › Date and Time. Alternatively, start the Clock and Time Zone dialog directly by running sudo yast2 timezone & from a command line.

First, select a general region, such as Europe. Choose an appropriate country that matches the one you are working in, for example, Germany.

Depending on which operating systems run on your workstation, adjust the hardware clock settings accordingly:

  • If you run another operating system on your machine, such as Microsoft Windows*, it is likely your system does not use UTC, but local time. In this case, deactivate Hardware Clock Set To UTC.

  • If you only run Linux on your machine, set the hardware clock to UTC and have the switch from standard time to daylight saving time performed automatically.

Important
Important: Set the Hardware Clock to UTC

The switch from standard time to daylight saving time (and vice versa) can only be performed automatically when the hardware clock (CMOS clock) is set to UTC. This also applies if you use automatic time synchronization with NTP, because automatic synchronization will only be performed if the time difference between the hardware and system clock is less than 15 minutes.

Since a wrong system time can cause serious problems (missed backups, dropped mail messages, mount failures on remote file systems, etc.), it is strongly recommended to always set the hardware clock to UTC.
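To check whether the hardware clock and the system time currently agree, you can compare them from a command line (hwclock is part of the util-linux package; this is just a quick check, not part of the YaST procedure):

root # hwclock --show
root # date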

You can change the date and time manually or opt for synchronizing your machine against an NTP server, either permanently or only for adjusting your hardware clock.

Procedure 14.2: Manually Adjusting Time and Date
  1. In the YaST timezone module, click Other Settings to set date and time.

  2. Select Manually and enter date and time values.

  3. Confirm your changes.

Procedure 14.3: Setting Date and Time With NTP Server
  1. Click Other Settings to set date and time.

  2. Select Synchronize with NTP Server.

  3. Enter the address of an NTP server, if not already populated.

  4. Click Synchronize Now to get your system time set correctly.

  5. To use NTP permanently, enable Save NTP Configuration.

  6. With the Configure button, you can open the advanced NTP configuration. For details, see Section 25.1, “Configuring an NTP Client with YaST”.

  7. Confirm your changes.

Part VI Updating and Upgrading SUSE Linux Enterprise

15 Life Cycle and Support

This chapter provides background information on terminology, SUSE product lifecycles and Service Pack releases, and recommended upgrade policies.

16 Upgrading SUSE Linux Enterprise

SUSE® Linux Enterprise (SLE) allows you to upgrade an existing system to the new version, for example, going from SLE 11 SP4 to the latest SLE 12 service pack. No new installation is needed. Existing data, such as home and data directories and system configuration, is kept intact. You can update from a local CD or DVD drive or from a central network installation source.

This chapter explains how to manually upgrade your SUSE Linux Enterprise system, be it by DVD, network, an automated process, or SUSE Manager.

17 Upgrading Offline

This chapter describes how to upgrade an existing SUSE Linux Enterprise installation using YaST which is booted from an installation medium. The YaST installer can, for example, be started from a DVD, over the network, or from the hard disk the system resides on.

18 Upgrading Online

SUSE offers an intuitive graphical and a simple command line tool to upgrade a running system to a new service pack. They provide support for rollback of service packs and more. This chapter explains how to do a service pack upgrade step by step with these tools.

19 Backporting Source Code

SUSE extensively uses backports, for example for the migration of current software fixes and features into released SUSE Linux Enterprise packages. The information in this chapter explains why it can be misleading to compare version numbers to judge the capabilities and the security of SUSE Linux Enterprise software packages. This chapter also explains how SUSE keeps the system software secure and current while maintaining compatibility for your application software on top of SUSE Linux Enterprise products. You will also learn how to check which public security issues are actually addressed in your SUSE Linux Enterprise system software, and the current status of your software.

15 Life Cycle and Support

Abstract

This chapter provides background information on terminology, SUSE product lifecycles and Service Pack releases, and recommended upgrade policies.

15.1 Terminology

This section uses several terms. To understand the information, read the definitions below:

Backporting

Backporting is the act of adapting specific changes from a newer version of software and applying them to an older version. The most common use case is fixing security holes in older software components. Usually, backporting is also part of a maintenance model to supply enhancements or (less commonly) new features.

Delta RPM

A delta RPM consists only of the binary diff between two defined versions of a package, and therefore has the smallest download size. Before being installed, the full RPM package is rebuilt on the local machine.

Downstream

A metaphor for how software is developed in the open source world (compare it with upstream). The term downstream refers to people or organizations like SUSE who integrate the source code from upstream with other software to build a distribution that is then used by end users. Thus, the software flows downstream from its developers via the integrators to the end users.

Extensions, Add-On Products

Extensions and third-party add-on products provide additional functionality and value to SUSE Linux Enterprise Desktop. They are provided by SUSE and by SUSE partners, and they are registered and installed on top of the base product SUSE Linux Enterprise Desktop.

LTSS

LTSS is the abbreviation for Long Term Service Pack Support, which is available as an extension for SUSE Linux Enterprise Desktop.

Major Release, General Availability (GA) Version

The major release of SUSE Linux Enterprise (or any software product) is a new version that brings new features and tools, decommissions previously deprecated components, and comes with backward-incompatible changes. Examples of major releases are SUSE Linux Enterprise 11 and 12.

Migration

Updating to a Service Pack (SP) by using the online update tools or an installation medium to install the respective patches. It updates all packages of the installed system to the latest state.

Migration Targets

Set of compatible products to which a system can be migrated, containing the version of the products/extensions and the URL of the repository. Migration targets can change over time and depend on installed extensions. Multiple migration targets can be selected, for example SLE 12 SP2 and SES2 or SLE 12 SP2 and SES3.

Modules

Modules are fully supported parts of SUSE Linux Enterprise Desktop with a different life cycle. They have a clearly defined scope and are delivered via an online channel only. Registering at the SUSE Customer Center, SMT (Subscription Management Tool), or SUSE Manager is a prerequisite for subscribing to these channels.

Package

A package is a compressed file in rpm format that contains all files for a particular program, including optional components like configuration, examples, and documentation.

Patch

A patch consists of one or more packages and may be applied by means of delta RPMs. It may also introduce dependencies to packages that are not installed yet.

Service Packs (SP)

Combines several patches into a form that is easy to install or deploy. Service packs are numbered and usually contain security fixes, updates, upgrades, or enhancements of programs.

Upstream

A metaphor for how software is developed in the open source world (compare it with downstream). The term upstream refers to the original project, author, or maintainer of a software that is distributed as source code. Feedback, patches, feature enhancements, or other improvements flow from end users or contributors to upstream developers. They decide if the request will be integrated or rejected.

If the project members decide to integrate the request, it will show up in newer versions of the software. An accepted request will benefit all parties involved.

If a request is not accepted, it may be for different reasons: it is in a state that is not compliant with the project's guidelines, it is invalid, it is already integrated, or it is not in the interest or road map of the project. An unaccepted request makes things harder for the contributors, as they need to keep their patches synchronized with the upstream code. This practice is generally avoided, but sometimes it is still needed.

Update

Installation of a newer minor version of a package, which usually contains security or bug fixes.

Upgrade

Installation of a newer major version of a package or distribution, which brings new features.

15.2 Product Life Cycle

SUSE has the following life cycle for products:

  • SUSE Linux Enterprise Server has a 13-year life cycle: 10 years of general support and 3 years of extended support.

  • SUSE Linux Enterprise Desktop has a 10-year life cycle: 7 years of general support and 3 years of extended support.

  • Major releases are made every 4 years. Service packs are made every 12-14 months.

SUSE supports previous service packs for 6 months after the release of the new service pack. Figure 15.1, “Major Releases and Service Packs” illustrates these release and support cycles.

Figure 15.1: Major Releases and Service Packs

If you need additional time to design, validate and test your upgrade plans, Long Term Service Pack Support can extend the support you get by an additional 12 to 36 months in 12-month increments, giving you a total of between 2 and 5 years of support on any service pack (see Figure 15.2, “Long Term Service Pack Support”).

Figure 15.2: Long Term Service Pack Support

For more information refer to https://www.suse.com/products/long-term-service-pack-support/.

For the life cycles of all products refer to https://www.suse.com/lifecycle/.

15.3 Module Life Cycles

With SUSE Linux Enterprise 12, SUSE introduces modular packaging. The modules are distinct sets of packages grouped into their own maintenance channel and updated independently of service pack life cycles. This allows you to get timely and easy access to the latest technology in areas where innovation is occurring at a rapid pace. For information about the life cycles of modules refer to https://scc.suse.com/docs/lifecycle/sle/12/modules.

15.4 Generating Periodic Life Cycle Report

SUSE Linux Enterprise Desktop can regularly check for changes in the support status of all installed products and send the report via e-mail in case of changes. To generate the report, install the zypper-lifecycle-plugin package:

root # zypper in zypper-lifecycle-plugin

Enable the report generation on your system with systemctl:

root # systemctl enable lifecycle-report

The recipient and subject of the report e-mail, as well as the report generation period can be configured in the file /etc/sysconfig/lifecycle-report with any text editor. The settings MAIL_TO and MAIL_SUBJ define the mail recipient and subject, while DAYS sets the interval at which the report is generated.

The report displays changes in the support status after the change occurred and not in advance. If the change occurs right after the generation of the last report, it can take up to 14 days until you are notified of the change. Take this into account when setting the DAYS option. Change the following configuration entries to fit your requirements:

MAIL_TO='root@localhost'
MAIL_SUBJ='Lifecycle report'
DAYS=14

The latest report is available in the file /var/lib/lifecycle/report. The file contains two sections. The first section informs about the end of support for used products. The second section lists packages with their support end dates and update availability.
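Apart from the periodic report, you can query the life cycle information on demand with the zypper lifecycle command provided by the same plugin:

root # zypper lifecycle

This prints the support end dates of the installed products and packages.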

15.5 Support Levels

Extended support levels range from year 10 to year 13. They include continued L3 engineering-level diagnosis and reactive critical bug fixes. With these support levels, you receive updates for trivially exploitable root exploits in the kernel and other root exploits directly executable without user interaction. Furthermore, they support existing workloads, software stacks, and hardware with a limited package exclusion list. Find an overview in Table 15.1, “Security Updates and Bug Fixes”.

Table 15.1: Security Updates and Bug Fixes

The columns Year 1-5, Year 6-7, and Year 8-10 refer to General Support for the most recent Service Pack (SP). The column Year 4-10 refers to General Support for the previous SP, with LTSS; the column Year 10-13 refers to Extended Support with LTSS.

Feature                                       | Year 1-5 | Year 6-7 | Year 8-10 | Year 4-10 | Year 10-13
Technical Services                            | Yes      | Yes      | Yes       | Yes       | Yes
Access to Patches and Fixes                   | Yes      | Yes      | Yes       | Yes       | Yes
Access to Documentation and Knowledge Base    | Yes      | Yes      | Yes       | Yes       | Yes
Support for Existing Stacks and Workloads     | Yes      | Yes      | Yes       | Yes       | Yes
Support for New Deployments                   | Yes      | Yes      | Limited*  | Limited*  | No
Enhancement Requests                          | Yes      | Limited* | Limited*  | No        | No
Hardware Enablement and Optimization          | Yes      | Limited* | Limited*  | No        | No
Driver Updates via SUSE SolidDriver Program   | Yes      | Yes      | Limited*  | Limited*  | No
(formerly PLDP)
Backport of Fixes from Recent SP              | Yes      | Yes      | Limited*  | N/A       | N/A
Critical Security Updates                     | Yes      | Yes      | Yes       | Yes       | Yes
Defect Resolution                             | Yes      | Yes      | Limited** | Limited** | Limited**

*  Limited: based on partner and customer requests
** Limited: Severity Level 1 and 2 defects only

15.6 Repository Model

The repository layout corresponds to the product lifecycles. The following sections contain a list of all relevant repositories.

Description of Required Repositories
Updates

Maintenance updates to packages in the corresponding Core or Pool repository.

Pool

All binary RPMs from the installation media, plus pattern information and support status metadata.

Description of Optional Repositories
Debuginfo-Pool, Debuginfo-Updates

The Debuginfo-Pool repository contains static content; only the Debuginfo-Updates repository receives updates. Enable these repositories if you need to install libraries with debug information in case of an issue.

15.6.1 Required Repositories for SUSE Linux Enterprise Server

SLES 11 SP3
SLES11-SP3-Pool
SLES11-SP3-Updates
SLES 11 SP4
SLES11-SP4-Pool
SLES11-SP4-Updates
SLES 12
SLES12-GA-Pool
SLES12-GA-Updates
SLES 12 SP1
SLES12-SP1-Pool
SLES12-SP1-Updates
SLES 12 SP2
SLES12-SP2-Pool
SLES12-SP2-Updates
SLES 12 SP3
SLES12-SP3-Pool
SLES12-SP3-Updates

15.6.2 Optional Repositories for SUSE Linux Enterprise Server

SLES 11 SP3
SLES11-SP3-Debuginfo-Core
SLES11-SP3-Debuginfo-Updates
SLES11-SP3-Extension-Store
SLES11-SP3-Extra
SLES 12
SLES12-GA-Debuginfo-Core
SLES12-GA-Debuginfo-Updates
SLES 12 SP1
SLES12-SP1-Debuginfo-Core
SLES12-SP1-Debuginfo-Updates
SLES 12 SP2
SLES12-SP2-Debuginfo-Core
SLES12-SP2-Debuginfo-Updates
SLES 12 SP3
SLES12-SP3-Debuginfo-Core
SLES12-SP3-Debuginfo-Updates

15.6.3 Module-Specific Repositories for SUSE Linux Enterprise Server

The following listing contains only the core repositories for each module, but not Debuginfo or Source repositories.

Modules Available for SLES 12 GA/SP1/SP2/SP3
  • Advanced Systems Management Module: CFEngine, Puppet and the Machinery tool

    SLE-Module-Adv-Systems-Management12-Pool
    SLE-Module-Adv-Systems-Management12-Updates
  • Containers Module: Docker, tools, prepackaged images

    SLE-Module-Containers12-Pool
    SLE-Module-Containers12-Updates
  • Legacy Module: Sendmail, old IMAP stack, old Java, … (not available on AArch64)

    SLE-Module-Legacy12-Pool
    SLE-Module-Legacy12-Updates
  • Public Cloud Module: public cloud initialization code and tools

    SLE-Module-Public-Cloud12-Pool
    SLE-Module-Public-Cloud12-Updates
  • Toolchain Module: GNU Compiler Collection (GCC)

    SLE-Module-Toolchain12-Pool
    SLE-Module-Toolchain12-Updates
  • Web and Scripting Module: PHP, Python, Ruby on Rails

    SLE-Module-Web-Scripting12-Pool
    SLE-Module-Web-Scripting12-Updates
Modules Available for SLES 12 GA/SP1
  • Certifications Module: FIPS 140-2 certification-specific packages (not available on AArch64 and POWER)

    SLE-Module-Certifications12-Pool
    SLE-Module-Certifications12-Updates
Modules Available for SLES 12 SP2/SP3
  • HPC Module: tools and libraries related to High Performance Computing

    SLE-Module-HPC12-Pool
    SLE-Module-HPC12-Updates

15.6.4 Required Repositories for SUSE Linux Enterprise Desktop

SLED 11 SP3
SLED11-SP3-Pool
SLED11-SP3-Updates
SLED 11 SP4
SLED11-SP4-Pool
SLED11-SP4-Updates
SLED 12
SLED12-GA-Pool
SLED12-GA-Updates
SLED 12 SP1
SLED12-SP1-Pool
SLED12-SP1-Updates
SLED 12 SP2
SLED12-SP2-Pool
SLED12-SP2-Updates
SLED 12 SP3
SLED12-SP3-Pool
SLED12-SP3-Updates

15.6.5 Optional Repositories for SUSE Linux Enterprise Desktop

SLED 11 SP3
SLED11-SP3-Debuginfo-Core
SLED11-SP3-Debuginfo-Updates
SLED11-SP3-Extension-Store
SLED11-SP3-Extra
SLED 12
SLED12-GA-Debuginfo-Core
SLED12-GA-Debuginfo-Updates
SLED 12 SP1
SLED12-SP1-Debuginfo-Core
SLED12-SP1-Debuginfo-Updates
SLED 12 SP2
SLED12-SP2-Debuginfo-Core
SLED12-SP2-Debuginfo-Updates
SLED 12 SP3
SLED12-SP3-Debuginfo-Core
SLED12-SP3-Debuginfo-Updates

15.6.6 Origin of Packages

SUSE Linux Enterprise 11 SP3/SP4.  With the update to SP3 or SP4, only two repositories are available: SLED11-SP3-Pool and SLED11-SP3-Updates (or the corresponding SP4 repositories). Repositories from previous releases are no longer visible.

SUSE Linux Enterprise 12 and SP1/SP2.  With the update to SUSE Linux Enterprise 12, only two repositories are available: SLED12-GA-Pool and SLED12-GA-Updates (correspondingly for SP1 and SP2). Repositories from SUSE Linux Enterprise 11 are no longer visible.

15.6.7 Register and Unregister Repositories with SUSEConnect

On registration, the system receives repositories from the SUSE Customer Center (see https://scc.suse.com/) or a local registration proxy like SMT. The repository names map to specific URIs in the customer center. To list all available repositories on your system, use zypper as follows:

root # zypper repos -u

This gives you a list of all available repositories on your system. Each repository is listed by its alias and name, and whether it is enabled and will be refreshed. The option -u also shows the URI from which the repository originates.

To register your machine, run SUSEConnect, for example:

root # SUSEConnect -r REGCODE

On SP1 and later, you can also use SUSEConnect to unregister your machine:

root # SUSEConnect --de-register

To check your locally installed products and their status, use the following command:

root # SUSEConnect -s

16 Upgrading SUSE Linux Enterprise

Abstract

SUSE® Linux Enterprise (SLE) allows you to upgrade an existing system to the new version, for example, going from SLE 11 SP4 to the latest SLE 12 service pack. No new installation is needed. Existing data, such as home and data directories and system configuration, is kept intact. You can update from a local CD or DVD drive or from a central network installation source.

This chapter explains how to manually upgrade your SUSE Linux Enterprise system, be it by DVD, network, an automated process, or SUSE Manager.

16.1 Supported Upgrade Paths to SLE 12 SP3

Important
Important: Cross-architecture Upgrades Are Not Supported

Cross-architecture upgrades, such as upgrading from a 32-bit version of SUSE Linux Enterprise Desktop to the 64-bit version, or upgrading from big endian to little endian are not supported!

Specifically, upgrading from SLE 11 on POWER (big endian) to SLE 12 SP2 on POWER (new: little endian!) is not supported.

Also, since SUSE Linux Enterprise 12 is 64-bit only, upgrades from any 32-bit SUSE Linux Enterprise 11 systems to SUSE Linux Enterprise 12 and later are not supported.

To make a cross-architecture upgrade, you need to perform a new installation.

Before you perform any migration, read Section 16.3, “Preparing the System”.

Note
Note: Skipping Service Packs

Skipping Service Packs on SUSE Linux Enterprise Desktop is not supported. You need to consecutively install all Service Packs.

Note
Note: Upgrading Major Releases

We recommend performing a fresh installation when upgrading to a new major release, for example from SUSE Linux Enterprise 11 to SUSE Linux Enterprise 12.

Upgrading from SUSE Linux Enterprise 10 (any Service Pack)

There is no supported direct migration path to SUSE Linux Enterprise 12. We recommend a fresh installation in this case.

Upgrading from SUSE Linux Enterprise 11 GA / SP1 / SP2 / SP3

There is no supported direct migration path to SUSE Linux Enterprise 12. You need at least SLE 11 SP4 before you can proceed to SLE 12 SP3.

If you cannot do a fresh install, first upgrade your installed SLE 11 Service Pack to SLE 11 SP4. These steps are described in the SUSE Linux Enterprise 11 Deployment Guide.

Upgrading from SUSE Linux Enterprise 11 SP4

Upgrading from SLE 11 SP4 to SLE 12 SP3 is only supported via an offline upgrade. Refer to Section 16.2, “Online and Offline Upgrade” for details.

Upgrading from SUSE Linux Enterprise 12 GA to SP3

A direct upgrade from SLE 12 GA to SP3 is not supported. Upgrade to SLE 12 SP2 first.

Upgrading from SUSE Linux Enterprise 12 SP1 to SP3

A direct upgrade from SLE 12 SP1 to SP3 is not supported. Upgrade to SLE 12 SP2 first.

Upgrading from SUSE Linux Enterprise 12 SP2 to SP3

Upgrading from SUSE Linux Enterprise 12 SP2 to SP3 is supported.

Upgrading from SUSE Linux Enterprise 12 LTSS GA / SP1 LTSS / SP2 to SP3

Updating any previous SLE 12 LTSS version to SP3 is supported.

16.2 Online and Offline Upgrade

SUSE supports two different upgrade and migration methods. For more information about the terminology, see Section 15.1, “Terminology”. The methods are:

Online

All upgrades that are executed from the running system are considered online. Examples: systems connected through the SUSE Customer Center, Subscription Management Tool (SMT), or SUSE Manager, upgraded using Zypper or YaST.

When migrating between Service Packs of the same major release, we suggest following Section 18.4, “Upgrading with the Online Migration Tool (YaST)” or Section 18.5, “Upgrading with Zypper”.

Offline

Offline methods usually boot another operating system from which the installed SLE version is upgraded. Examples are: DVD, flash disk, ISO image, AutoYaST, plain RPM or PXE boot.

Upgrading from SUSE Linux Enterprise 11 SP4 to SUSE Linux Enterprise 12 SP3 is only supported via an offline upgrade. See Chapter 17, Upgrading Offline.

Upgrading from any SUSE Linux Enterprise 12 LTSS version or SUSE Linux Enterprise 12 SP1 or SP2 to SP3 is supported using all offline and online methods. See Chapter 17, Upgrading Offline and Chapter 18, Upgrading Online.

16.3 Preparing the System

Before starting the upgrade procedure, make sure your system is properly prepared. Among other things, preparation involves backing up data and checking the release notes.

16.3.1 Make Sure the Current System is Up-To-Date

Upgrading the system is only supported from the most recent patch level. Make sure the latest system updates are installed, either by running zypper patch or by starting the YaST module Online Update.
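For example, to refresh the repositories and install all needed patches from a command line:

root # zypper refresh
root # zypper patch

If patches for the update stack itself are installed first, run zypper patch again afterward to get the remaining patches.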

16.3.2 Read the Release Notes

In the release notes you can find additional information on changes since the previous release of SUSE Linux Enterprise Desktop. Check the release notes to see whether:

  • your hardware needs special considerations;

  • any used software packages have changed significantly;

  • special precautions are necessary for your installation.

The release notes also provide information that could not make it into the manual on time. They also contain notes about known issues.

Find the release notes locally in the directory /usr/share/doc/release-notes or online at https://www.suse.com/releasenotes/.

16.3.3 Make a Backup

Before updating, copy existing configuration files to a separate medium (such as tape device, removable hard disk, etc.) to back up the data. This primarily applies to files stored in /etc and some directories and files in /var and /opt. You may also want to write the user data in /home (the HOME directories) to a backup medium. Back up this data as root. Only root has read permissions for all local files.
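As a minimal command line sketch, assuming a backup medium is mounted at /backup (a hypothetical path; adapt it to your setup), you could archive the most important directories with tar:

root # tar czf /backup/etc.tar.gz /etc
root # tar czf /backup/home.tar.gz /home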

If you have selected Update an Existing System as the installation mode in YaST, you can choose to do a (system) backup at a later point in time. You can choose to include all modified files and files from the /etc/sysconfig directory. However, this is not a complete backup, as all the other important directories mentioned above are missing. Find the backup in the /var/adm/backup directory.

16.3.3.1 Listing Installed Packages and Repositories

It is often useful to have a list of installed packages, for example when doing a fresh install of a new major SLE release or reverting to the old version.

Be aware that not all installed packages or used repositories are available in newer releases of SUSE Linux Enterprise. Some may have been renamed and others replaced. It is also possible that some packages are still available for legacy purposes while another package is used by default. Therefore some manual editing of the files might be necessary. This can be done with any text editor.

Create a file named repositories.bak containing a list of all used repositories:

root # zypper lr -e repositories.bak

Also create a file named installed-software.bak containing a list of all installed packages:

root # rpm -qa --queryformat '%{NAME}\n' > installed-software.bak

Back up both files. The repositories and installed packages can be restored with the following commands:

root # zypper ar repositories.bak
root # zypper install $(cat installed-software.bak)
Note
Note: Number of Packages Increases with an Update to a New Major Release

A system upgraded to a new major version (SLE X+1) may contain more packages than the initial system (SLE X). It will also contain more packages than a fresh installation of SLE X+1 with the same pattern selection. Reasons for this are:

  • Packages got split to allow a more fine-grained package selection. For example, 37 texlive packages on SLE 11 were split into 422 packages on SLE 12.

  • When a package got split into other packages, all new packages are installed in the upgrade case to retain the same functionality as with the previous version. However, the new default for a fresh installation of SLE X+1 may be to not install all packages.

  • Legacy packages from SLE X may be kept for compatibility reasons.

  • Package dependencies and the scope of patterns may have changed.

16.3.4 Migrate your MySQL Database

As of SUSE Linux Enterprise 12, SUSE switched from MySQL to MariaDB. Before you start any upgrade, it is highly recommended to back up your database.

To perform the database migration, do the following:

  1. Log in to your SUSE Linux Enterprise 11 machine.

  2. Create a dump file:

    root # mysqldump -u root -p --all-databases > mysql_backup.sql

    By default, mysqldump does not dump the INFORMATION_SCHEMA or performance_schema database. For more details refer to https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html.

  3. Store your dump file, the configuration file /etc/my.cnf, and the directory /etc/mysql/ for later investigation (NOT installation!) in a safe place.

  4. Perform your upgrade. After the upgrade, your former configuration file /etc/my.cnf is still intact. You can find the new configuration in the file /etc/my.cnf.rpmnew.

  5. Configure your MariaDB database to your needs. Do NOT use the former configuration file and directory directly; use them as a reference and adapt them. If needed, you can restore data from the dump, as shown after this procedure.

  6. Make sure you start the MariaDB server:

    root # systemctl start mysql

    If you want to start the MariaDB server on every boot, enable the service:

    root # systemctl enable mysql
  7. Verify that MariaDB is running properly by connecting to the database:

    root # mysql -u root -p
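If the upgraded server does not contain your databases, you can restore them from the dump file created in Step 2. This is a generic sketch using the standard client; adapt the file name if you stored the dump elsewhere:

root # mysql -u root -p < mysql_backup.sql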

16.3.5 Migrate your PostgreSQL Database

SLE 11 SP3 and SLE 12 GA receive a newer version of the PostgreSQL database as a maintenance update. Because the database itself requires migration work, there is no automatic upgrade process; the switch from one version to another needs to be done manually.

The migration is performed with the pg_upgrade command, an alternative to the classic dump and reload method. Compared to dump and reload, pg_upgrade makes the migration less time-consuming.

Each PostgreSQL version stores its files in different, version-dependent directories. After the update, the directories change as follows:

SLE11 SP3/SP4

/usr/lib/postgresql91/ to /usr/lib/postgresql94/

SLE12 GA

/usr/lib/postgresql93/ to /usr/lib/postgresql94/

To perform the database migration, do the following:

  1. Make sure the following preconditions are fulfilled:

    • If not already done, upgrade any package of the old PostgreSQL version to the latest release through a maintenance update.

    • Create a backup of your existing database.

    • Install the packages of the new PostgreSQL major version. For SLE 12, this means installing postgresql94-server and all the packages it depends on.

    • Install the package postgresql94-contrib, which contains the command pg_upgrade.

    • Make sure you have enough free space in your PostgreSQL data area, which is /var/lib/pgsql/data by default. If space is tight, try to reduce size with the following SQL command on each database (can take very long!):

      VACUUM FULL
  2. Stop the PostgreSQL server:

    root # /usr/sbin/rcpostgresql stop
  3. Rename your old data directory:

    root # mv /var/lib/pgsql/data /var/lib/pgsql/data.old
  4. Create a new data directory:

    root # mkdir -p /var/lib/pgsql/data
  5. If you have changed your configuration files in the old version, copy the files postgresql.conf and pg_hba.conf to your new data directory:

    root # cp /var/lib/pgsql/data.old/*.conf \
         /var/lib/pgsql/data
  6. Initialize your new database instance either manually with initdb or by starting and stopping PostgreSQL, which will do it automatically:

    root # /usr/sbin/rcpostgresql start
    root # /usr/sbin/rcpostgresql stop
  7. Start the migration process and replace the OLD placeholder with the older version:

    root # pg_upgrade \
       --old-datadir "/var/lib/pgsql/data.old" \
       --new-datadir "/var/lib/pgsql/data" \
       --old-bindir "/usr/lib/postgresqlOLD/bin/" \
       --new-bindir "/usr/lib/postgresql94/bin/"
  8. Start your new database instance:

    root # /usr/sbin/rcpostgresql start
  9. Check whether the migration was successful. There is no general tool to automate this step; how much and what you test depends on your use case. A simple smoke test is shown after this procedure.

  10. Remove any old PostgreSQL packages and your old data directory:

    root # zypper search -s postgresqlOLD | xargs zypper rm -u
    root # rm -rf /var/lib/pgsql/data.old
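For the check in Step 9, a simple smoke test is to connect to the new instance and list all databases. This is only a minimal sketch; what constitutes a sufficient check depends on your applications:

root # sudo -u postgres psql -l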

16.3.6 Create Non-MD5 Server Certificates for Java Applications

During the update from SP1 to SP2, MD5-based certificates were disabled as part of a security fix. If you have certificates created as MD5, re-create your certificates with the following steps:

  1. Open a terminal and log in as root.

  2. Create a private key:

    root # openssl genrsa -out server.key 1024

    If you want a stronger key, replace 1024 with a higher number, for example, 4096.

  3. Create a certificate signing request (CSR):

    root # openssl req -new -key server.key -out server.csr
  4. Self-sign the certificate:

    root # openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
  5. Create the PEM file:

    root # cat server.key server.crt > server.pem
  6. Place the files server.crt, server.csr, server.key, and server.pem in the respective directories where the keys can be found. For Tomcat, for example, this directory is /etc/tomcat/ssl/.
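To verify that the re-created certificate no longer uses an MD5-based signature, you can inspect its signature algorithm. This quick check is not part of the procedure above:

root # openssl x509 -in server.crt -noout -text | grep 'Signature Algorithm'

The output should show a SHA-based algorithm, for example sha256WithRSAEncryption.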

16.3.7 Shut Down Virtual Machine Guests

If your machine serves as a VM Host Server for KVM or Xen, make sure to properly shut down all running VM Guests prior to the update. Otherwise you may not be able to access the guests after the update.

16.3.8 Check the clientSetup4SMT.sh Script on SMT Clients

If you are migrating your client OS that is registered against an SMT server, you need to check if the version of the clientSetup4SMT.sh script on your host is up to date. clientSetup4SMT.sh from older versions of SMT cannot manage SMT 12 clients. If you apply software patches regularly on your SMT server, you can always find the latest version of clientSetup4SMT.sh at <SMT_HOSTNAME>/repo/tools/clientSetup4SMT.sh.
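For example, assuming your SMT server is reachable over HTTP, you could fetch the current version of the script as follows (replace SMT_HOSTNAME with the real host name):

root # wget http://SMT_HOSTNAME/repo/tools/clientSetup4SMT.sh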

16.3.9 Disk Space

Software tends to grow from version to version. Therefore, take a look at the available partition space before updating. If you suspect you are running short of disk space, back up your data before increasing the available space by resizing partitions, for example. There is no general rule regarding how much space each partition should have. Space requirements depend on your particular partitioning profile and the software selected.

Note
Note: Automatic Check for Enough Space in YaST

During the update procedure, YaST checks how much free disk space is available and displays a warning if the installation may exceed the available amount. In that case, performing the update may lead to an unusable system! Only skip the warning and continue the update if you know exactly what you are doing (by testing beforehand).

16.3.9.1 Checking Disk Space on Non-Btrfs File Systems

Use the df command to list available disk space. For example, in Example 16.1, “List with df -h”, the root partition is /dev/sda3 (mounted as /).

Example 16.1: List with df -h
Filesystem     Size  Used Avail Use% Mounted on
/dev/sda3       74G   22G   53G  29% /
tmpfs          506M     0  506M   0% /dev/shm
/dev/sda5      116G  5.8G  111G   5% /home
/dev/sda1       39G  1.6G   37G   4% /windows/C
/dev/sda2      4.6G  2.6G  2.1G  57% /windows/D

16.3.9.2 Checking Disk Space on Btrfs Root File Systems

If you use Btrfs as the root file system on your machine, make sure there is enough free space. In the worst case, an upgrade needs as much disk space as the current root file system (without /.snapshots) for a new snapshot. To display available disk space, use the command:

root # df -h /

Check the available space on all other mounted partitions as well. The following recommendations have been proven:

  • For all file systems, including Btrfs, you need enough free disk space to download and install big RPMs. The space taken by old RPMs is only freed after the new RPMs are installed.

  • For Btrfs with snapshots, you need at minimum as much free space as your current installation takes. We recommend to have twice as much free space as the current installation.

    If you do not have enough free space, you can try to delete old snapshots with snapper:

    root # snapper list
    root # snapper delete NUMBER

    However, this may not help in all cases: before the migration, most snapshots occupy only a small amount of disk space.

16.3.10 Temporarily Disabling Kernel Multiversion Support

SUSE Linux Enterprise Desktop allows installing multiple kernel versions by enabling the respective settings in /etc/zypp/zypp.conf. Support for this feature needs to be temporarily disabled before upgrading to a service pack. When the update has finished successfully, multiversion support can be re-enabled. To disable multiversion support, comment out the respective lines in /etc/zypp/zypp.conf. The result should look similar to:

#multiversion = provides:multiversion(kernel)
#multiversion.kernels = latest,running

To re-activate this feature after a successful update, remove the comment signs. For more information about multiversion support, refer to Section 12.1, “Enabling and Configuring Multiversion Support”.
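Instead of editing the file manually, you can comment out both lines with a single sed call. This sketch assumes the option names shown above and that they start at the beginning of a line:

root # sed -i 's/^multiversion/#multiversion/' /etc/zypp/zypp.conf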

17 Upgrading Offline

Abstract

This chapter describes how to upgrade an existing SUSE Linux Enterprise installation using YaST which is booted from an installation medium. The YaST installer can, for example, be started from a DVD, over the network, or from the hard disk the system resides on.

17.1 Conceptual Overview

Before upgrading your system, read Section 16.3, “Preparing the System” first.

To upgrade your system, boot from an installation source as you would for a fresh installation. However, when the boot screen appears, select Upgrade (instead of Installation). The upgrade can be started from an installation medium, a network installation source, or the hard disk, as described in the following sections.

17.2 Starting the Upgrade from Installation Medium

The procedure below describes booting from a DVD, but you can also use another local installation medium, such as an ISO image on a USB mass storage device. The medium and boot method to select depend on the system architecture and on whether the machine has a traditional BIOS or UEFI.

Procedure 17.1: Manually Upgrading from SLE 11 SP4 to SLE 12 SP3
  1. Select and prepare a boot medium, see Section 3.2, “System Start-up for Installation”.

  2. Insert DVD 1 of the SUSE Linux Enterprise 12 SP3 installation medium and boot your machine. A Welcome screen is displayed, followed by the boot screen.

  3. Start up the system by selecting Upgrade in the boot menu.

  4. Proceed with the upgrade process as described in Section 17.6, “Upgrading SUSE Linux Enterprise”.

17.3 Starting Upgrade from Network Source

To start an upgrade from a network installation source, make sure that the following requirements are met:

Requirements for Upgrading from a Network Installation Source
Network Installation Source

A network installation source is set up according to Chapter 5, Setting Up the Server Holding the Installation Sources.

Network Connection and Network Services

Both the installation server and the target machine must have a functioning network connection. Required network services are:

  • Domain Name Service

  • DHCP (only needed for booting via PXE, IP can be set manually during setup)

  • OpenSLP (optional)

Boot Medium

You have SUSE Linux Enterprise Desktop DVD 1 (or a local ISO image) at hand to boot the target system, or a target system set up for booting via PXE according to Section 6.5, “Preparing the Target System for PXE Boot”. Refer to Chapter 7, Remote Installation for in-depth information on starting the upgrade from a remote server.

17.3.1 Manually Upgrading via Network Installation Source—Booting from DVD

This procedure describes booting from a DVD as an example, but you can also use another local installation medium like an ISO image on a USB mass storage device. The way to select the boot method and to start up the system from the medium depends on the system architecture and on whether the machine has a traditional BIOS or UEFI. For details, see the links below.

  1. Insert DVD 1 of the SUSE Linux Enterprise 12 SP3 installation medium and boot your machine. A Welcome screen is displayed, followed by the boot screen.

  2. Select the type of network installation source you want to use (FTP, HTTP, NFS, SMB, or SLP). Usually you get this choice by pressing F4, but in case your machine is equipped with UEFI instead of a traditional BIOS, you may need to manually adjust boot parameters. For details, see Installing from a Network Server in Chapter 3, Installation with YaST.

  3. Proceed with the upgrade process as described in Section 17.6, “Upgrading SUSE Linux Enterprise”.

17.3.2 Manually Upgrading via Network Installation Source—Booting via PXE

To perform an upgrade from a network installation source using PXE boot, proceed as follows:

  1. Adjust the setup of your DHCP server to provide the address information needed for booting via PXE. For details, see Section 6.5, “Preparing the Target System for PXE Boot”.

  2. Set up a TFTP server to hold the boot image needed for booting via PXE. Use DVD 1 of your SUSE Linux Enterprise 12 SP3 installation medium for this, or follow the instructions in Section 6.2, “Setting Up a TFTP Server”.

  3. Prepare PXE Boot and Wake-on-LAN on the target machine.

  4. Initiate the boot of the target system and use VNC to remotely connect to the installation routine running on this machine. For more information, see Section 7.3.1, “VNC Installation”.

  5. Proceed with the upgrade process as described in Section 17.6, “Upgrading SUSE Linux Enterprise”.

17.4 Starting Upgrade from Hard Disk

17.4.1 Automated Migration from SUSE Linux Enterprise 11 SP3 or SP4 to SUSE Linux Enterprise 12 SP3

  1. Copy the installation kernel linux and the initrd file from /boot/x86_64/loader/ on your first installation DVD to your system's /boot directory:

    root # cp -vi DVDROOT/boot/x86_64/loader/linux /boot/linux.upgrade
    root # cp -vi DVDROOT/boot/x86_64/loader/initrd /boot/initrd.upgrade

    DVDROOT denotes the path where your system mounts the DVD, usually /run/media/$USER/$DVDNAME.

  2. Open the GRUB legacy configuration file /boot/grub/menu.lst and add another section. For other boot loaders, edit the respective configuration file(s). Adjust device names and the root parameter accordingly. For example:

    title Linux Upgrade Kernel
    kernel (hd0,0)/boot/linux.upgrade root=/dev/sda1 upgrade=1 autoupgrade=1
    initrd (hd0,0)/boot/initrd.upgrade
  3. The following steps require either the DVD in the drive or an ISO image on the local disk. If the computer has no DVD drive, either download the ISO image and save it as /install.iso, or copy it from the DVD:

    root # dd if=/dev/cdrom of=/install.iso

    If you copied the ISO to your hard disk, you need to add a parameter to the previously edited /boot/grub/menu.lst. To the line beginning with kernel add the option

    install=hd:/install.iso
  4. Reboot your machine and select the newly added section (here: Linux Upgrade Kernel) from the boot menu. You can use grubonce to preselect this entry for an unattended automatic reboot (see the example after this procedure). You can also use reboot to initiate the reboot from the command line.

  5. Proceed with the upgrade process as described in Section 17.6, “Upgrading SUSE Linux Enterprise”.

  6. After the upgrade process has finished successfully, remove the installation kernel and initrd files (/boot/linux.upgrade and /boot/initrd.upgrade). They are not needed anymore.
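As an example for Step 4, assuming the newly added section is the fourth entry in /boot/grub/menu.lst (GRUB legacy counts entries starting at 0), the following commands would preselect it for the next boot only and reboot:

root # grubonce 3
root # reboot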

17.5 Enabling Automatic Upgrade

The upgrade process can be executed automatically. To enable the automatic upgrade, the kernel parameter autoupgrade=1 must be set. The parameter can be set at boot time in the Boot Options field. For details, see https://www.suse.com/documentation/sles-12/book_autoyast/data/introduction.html.

17.6 Upgrading SUSE Linux Enterprise

Before you upgrade your system, read Section 16.3, “Preparing the System”. To perform the upgrade, proceed as follows:

  1. After you have booted (either from an installation medium or the network), select the Upgrade entry on the boot screen. If you want to do the upgrade as described in the next steps manually, you need to disable the automatic upgrade process. Refer to Section 17.5, “Enabling Automatic Upgrade”.

    Warning
    Warning: Wrong Choice May Lead to Data Loss

    If you select Installation instead of Upgrade, data may be lost later. You need to be extra careful not to destroy your data partitions by doing a fresh installation.

    Make sure to select Upgrade here.

    YaST starts the installation system.

  2. On the Welcome screen, choose Language and Keyboard and accept the license agreement. Proceed with Next.

    YaST checks your partitions for already installed SUSE Linux Enterprise systems.

  3. On the Select for Upgrade screen, select the partition to upgrade and click Next.

    YaST mounts the selected partition and displays all repositories that have been found on the partition that you want to upgrade.

  4. On the Previously Used Repositories screen, adjust the status of the repositories: enable those you want to include in the upgrade process and disable any repositories that are no longer needed. Proceed with Next.

  5. On the Registration screen, select whether to register the upgraded system now (by entering your registration data and clicking Next) or whether to Skip Registration. For details on registering your system, see Section 17.9, “Registering Your System”.

  6. Review the Installation Settings for the upgrade, especially the Update Options. Choose between the following options:

    • Only Update Installed Packages, in which case you might miss new features shipped with the latest SUSE Linux Enterprise version.

    • Update with Installation of New Software and Features. Click Select Patterns if you want to enable or disable patterns and packages according to your wishes.

    Note
    Note: Choice of Desktop

    If you used KDE before upgrading to SUSE Linux Enterprise 12 (DEFAULT_WM in /etc/sysconfig/windowmanager was set to kde*), your desktop environment will automatically be replaced with GNOME after the upgrade. By default, the KDM display manager will be replaced with GDM.

    To change the choice of desktop environment or window manager, adjust the software selection by clicking Select Patterns.

  7. If all settings are according to your wishes, start the installation and removal procedure by clicking Update.

  8. After the upgrade process has finished successfully, check for any orphaned packages. Orphaned packages are packages that no longer belong to any active repository. The following command gives you a list of them:

    root # zypper packages --orphaned

    With this list, you can decide if a package is still needed or can be uninstalled safely.
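For example, to remove an orphaned package that is no longer needed (PACKAGE_NAME being a placeholder for a name from the list):

root # zypper rm PACKAGE_NAME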

17.7 Updating via SUSE Manager

SUSE Manager is a server solution for providing updates, patches, and security fixes for SUSE Linux Enterprise clients. It comes with a set of tools and a Web-based user interface for management tasks. See https://www.suse.com/products/suse-manager/ for more information about SUSE Manager.

SUSE Manager can support you with SP Migration or a full system upgrade.

SP Migration

SP Migration allows migrating from one Service Pack (SP) to another within one major version (for example, from SLES 12 SP1 to 12 SP2). For more information, see the SUSE Manager Best Practices (version 3.1), chapter Client Migration, section Migrating SUSE Linux Enterprise Server 12 or later to version 12 SP2, available at https://www.suse.com/documentation/suse-manager/.

System Upgrade

With SUSE Manager you can perform a system upgrade. Thanks to the integrated AutoYaST technology, upgrades from one major version to the next are possible (for example, from SLES 11 SP3 to 12 SP2). For more information, see the SUSE Manager Best Practices (version 3.1), chapter Client Migration, section Migrating SUSE Linux Enterprise 11 SP3 to version 12 SP2, available at https://www.suse.com/documentation/suse-manager/.

17.8 Updating Registration Status after Rollback

When performing a service pack upgrade, it is necessary to change the configuration on the registration server to provide access to the new repositories. If the upgrade process is interrupted or reverted (by restoring from a backup or snapshot), the information on the registration server becomes inconsistent with the status of the system. As a result, you may be prevented from accessing update repositories, or wrong repositories may be used on the client.

When a rollback is done via Snapper, the system will notify the registration server to ensure access to the correct repositories is set up during the boot process. If the system was restored any other way or the communication with the registration server failed for any reason (for example, because the server was not accessible because of network issues), trigger the rollback on the client manually by calling:

root # snapper rollback

We suggest always checking that the correct repositories are set up on the system, especially after refreshing the service using:

root # zypper ref -s

This functionality is available in the rollback-helper package.

17.9 Registering Your System

If you skipped the registration step during the installation, you can register your system at any time using the Product Registration module in YaST.

Registering your systems has these advantages:

  • Eligibility for support

  • Availability of security updates and bug fixes

  • Access to SUSE Customer Center

  1. Start YaST and select Software › Product Registration to open the Registration dialog.

  2. Provide the E-mail address associated with the SUSE account you or your organization uses to manage subscriptions. In case you do not have a SUSE account yet, go to the SUSE Customer Center home page (https://scc.suse.com/) to create one.

  3. Enter the Registration Code you received with your copy of SUSE Linux Enterprise Desktop.

  4. To start the registration, proceed with Next. If one or more local registration servers are available on your network, you can choose one of them from a list. Alternatively, to ignore the local registration servers and register with the default SUSE registration server, choose Cancel.

    During the registration, the online update repositories will be added to your upgrade setup. When finished, you can choose whether to install the latest available package versions from the update repositories. This provides a clean upgrade path for all packages and ensures that SUSE Linux Enterprise Desktop is upgraded with the latest security updates available. If you choose No, all packages will be installed from the installation media. Proceed with Next.

    After successful registration, YaST lists extensions, add-ons, and modules that are available for your system. To select and install them, proceed with Section 11.2, “Installing Modules and Extensions from Online Channels”.

18 Upgrading Online

Abstract

SUSE offers an intuitive graphical and a simple command line tool to upgrade a running system to a new service pack. They provide support for rollback of service packs and more. This chapter explains how to do a service pack upgrade step by step with these tools.

18.1 Conceptual Overview

SUSE releases new service packs for the SUSE Linux Enterprise family at regular intervals. To make it easy for customers to migrate to a new service pack and minimize downtime, SUSE supports migrating online while the system is running.

Starting with SLE 12, YaST Wagon has been replaced by YaST migration (GUI) and Zypper migration (command line). The following features are supported:

  • System always in a defined state until the first RPM is updated

  • Canceling is possible until the first RPM is updated

  • Simple recovery, if there is an error

  • Rollback via system tools; no backup/restore needed

  • Use of all active repositories

  • The ability to skip a service pack

18.2 Service Pack Migration Workflow

A service pack migration can be executed by either YaST, zypper, or AutoYaST.

Before you can start a service pack migration, your system must be registered at the SUSE Customer Center or a local SMT server. SUSE Manager can also be used.

Regardless of the method, a service pack migration consists of the following steps:

  1. Find possible migration targets on your registered systems.

  2. Select one migration target.

  3. Request and enable new repositories.

  4. Run the migration.

The list of migration targets depends on the products you have installed and registered. If an installed extension does not yet have the new SP available, it is possible that no migration target is offered to you.

The list of migration targets available for your host will always be retrieved from the SUSE Customer Center and depend on products or extensions installed.

18.3 Canceling Service Pack Migration

A service pack migration can only be canceled at specific stages during the migration process:

  1. Until the package upgrade starts, there are only minimal changes on the system, such as to services and repositories. Restore /etc/zypp/repos.d/* to revert to the former state.

  2. After the package upgrade starts, you can revert to the former state by using a Snapper snapshot (see Chapter 7, System Recovery and Snapshot Management with Snapper).

  3. After the migration target has been selected, SUSE Customer Center changes the repository data. To revert this state manually, use SUSEConnect --rollback.

18.4 Upgrading with the Online Migration Tool (YaST)

To perform a service pack migration with YaST, use the Online Migration tool. By default, YaST does not install any packages from a third-party repository. If a package was installed from a third-party repository, YaST prevents packages from being replaced with the same package coming from SUSE.

Note
Note: Reduce Installation Size

When performing the SP migration, YaST will install all recommended packages. Especially in the case of custom minimal installations, this may increase the installation size of the system significantly.

To change this default behavior and allow only required packages, adjust /etc/zypp/zypp.conf and set the following variable:

solver.onlyRequires = true
installRecommends=false # or commented

This changes the behavior of all package operations, such as the installation of patches or new packages.

To start the service pack migration, do the following:

  1. Deactivate all unused extensions on your registration server to avoid future dependency conflicts. In case you forget an extension, YaST will later detect unused extension repositories and deactivate them.

  2. If you are logged in to a GNOME session running on the machine you are going to update, switch to a text console. Running the update from within a GNOME session is not recommended. Note that this does not apply if you are logged in from a remote machine (unless you are running a VNC session with GNOME).

  3. If you are an LTSS subscriber, make sure that the LTSS extension repository is active.

  4. Run YaST online update to get the latest package updates for your system.

  5. Install the package yast2-migration and its dependencies (in YaST under Software › Software Management).

  6. Restart YaST, otherwise the newly installed module will not be shown in the control center.

  7. In YaST, choose Online Migration (depending on the version of SUSE Linux Enterprise Desktop that you are upgrading from, this module is categorized under either System or Software). YaST will show possible migration targets and a summary.

  8. Select one migration target from the list and proceed with Next.

  9. In case the migration tool offers update repositories, it is recommended to proceed with Yes.

  10. If the Online Migration tool finds obsolete repositories coming from DVD or a local server, it is highly recommended to disable them. Obsolete repositories are from a previous SP. Any old repositories from SCC or SMT are removed automatically.

  11. Check the summary and proceed with the migration by clicking Next. Confirm with Start Update.

  12. After the successful migration, restart your system.

18.5 Upgrading with Zypper

To perform a service pack migration with Zypper, use the command line tool zypper migration from the package zypper-migration-plugin.

Note
Note: Reduce Installation Size

When performing the SP migration, Zypper will install all recommended packages. Especially in the case of custom minimal installations, this may increase the installation size of the system significantly.

To change this default behavior and allow only required packages, adjust /etc/zypp/zypp.conf and set the following variables:

solver.onlyRequires = true
installRecommends=false # or commented

This changes the behavior of all package operations, such as the installation of patches or new packages. To change the behavior of Zypper for a single invocation, add the parameter --no-recommends to your command line.
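For example, assuming the zypper-migration-plugin described below accepts the option and passes it on to the underlying zypper dup, a migration without recommended packages could be started as follows:

    # Run the migration, skipping recommended packages for this invocation only
    sudo zypper migration --no-recommends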

To start the service pack migration, do the following:

  1. If you are logged in to a GNOME session running on the machine you are going to update, switch to a text console. Running the update from within a GNOME session is not recommended. Note that this does not apply when being logged in from a remote machine (unless you are running a VNC session with GNOME).

  2. Register your SUSE Linux Enterprise machine if you have not done so:

    sudo SUSEConnect --regcode YOUR_REGISTRATION_CODE
  3. If you are an LTSS subscriber, make sure that the LTSS extension repository is active.

  4. Install the latest updates:

    sudo zypper patch
  5. Install the zypper-migration-plugin package and its dependencies:

    sudo zypper in zypper-migration-plugin
  6. Run zypper migration:

    tux > sudo zypper migration
    Executing 'zypper  patch-check'
    
    Refreshing service 'SUSE_Linux_Enterprise_Server_12_x86_64'.
    Loading repository data...
    Reading installed packages...
    0 patches needed (0 security patches)
    
    Available migrations:
    
        1 | SUSE Linux Enterprise Server 12 SP1 x86_64
        2 | SUSE Linux Enterprise Server 12 SP2 x86_64

    Some notes about the migration process:

    • If more than one migration target is available for your system, Zypper allows you to select one SP from the list. Selecting a later SP is the same as skipping one or more SPs. Keep in mind that online migration for base products (SLES, SLED) is only available between the SPs of one major version.

    • By default, Zypper uses the option --no-allow-vendor-change which is passed to zypper dup. If a package was installed from a third-party repository, this option prevents packages from being replaced with the same package coming from SUSE.

    • If Zypper finds obsolete repositories coming from DVD or a local server, it is highly recommended to disable them. Old SCC or SMT repositories are removed automatically.

  7. Review all the changes, especially the packages that are going to be removed. Proceed by typing y (the exact number of packages to upgrade can vary on your system):

    266 packages to upgrade, 54 to downgrade, 17 new, 8 to reinstall, 5 to remove, 1 to change arch.
    Overall download size: 285.1 MiB. Already cached: 0 B  After the operation, additional 139.8 MiB will be used.
    Continue? [y/n/? shows all options] (y):

    Use the ShiftPage ↑ or ShiftPage ↓ keys to scroll in your shell.

  8. After the successful migration, restart your system (a sketch for verifying the result follows below).
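After the reboot, you can verify that the migration reached the intended service pack. This is a minimal sketch; the exact output depends on your product and SP level:

    # The VERSION field reflects the installed service pack
    grep VERSION /etc/os-release
    # All installed products should be reported as Registered
    sudo SUSEConnect --status-text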

18.6 Upgrading with Plain Zypper

If you cannot use the YaST migration or the Zypper migration, you can still migrate with plain Zypper and some manual interaction. To start a service pack migration, do the following:

  1. If you are logged in to a GNOME session running on the machine you are going to update, switch to a text console. Running the update from within a GNOME session is not recommended. Note that this does not apply when being logged in from a remote machine (unless you are running a VNC session with GNOME).

  2. Update the package management tools with the old SUSE Linux Enterprise repositories:

    sudo zypper patch --updatestack-only
  3. If the system is registered, it needs to be deregistered:

    sudo SUSEConnect --de-register
  4. Remove the old installation sources and repositories and adjust the third-party repositories.

  5. Add the new installation sources, whether local or remote (for the placeholder REPOSITORY, refer to Section 15.6, “Repository Model”):

    sudo zypper addrepo REPOSITORY

    You can also use SUSE Customer Center or Subscription Management Tool. The command for SUSE Linux Enterprise 12 SP2 on x86-64 is:

    sudo SUSEConnect -p SLES/12.2/x86_64 OPTIONS

    Keep in mind that cross-architecture upgrades are not supported.

    Zypper will display a conflict between the old and new kernel. Choose Solution 1 to continue.

    Problem: product:SLES-12.2-0.x86_64 conflicts with kernel < 4.4 provided by kernel-default-VERSION
     Solution 1: Following actions will be done:
      replacement of kernel-default-VERSION with kernel-default-VERSION
      deinstallation of kernel-default-VERSION
     Solution 2: do not install product:SLES-12.2-0.x86_64
  6. Finalize the migration:

    sudo zypper ref -f -s
    sudo zypper dup --no-allow-vendor-change --no-recommends

    The first command will update all services and repositories. The second command performs a distribution upgrade. Here, the last two options are important: --no-allow-vendor-change ensures that third-party RPMs will not overwrite RPMs from the base system. The option --no-recommends ensures that packages deselected during the initial installation will not be added again. A post-upgrade sanity check is sketched below.
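As a post-upgrade sanity check, it can be useful to list installed packages that no longer belong to any active repository. This is a sketch; the resulting list may legitimately contain third-party packages you want to keep:

    # List installed packages that do not belong to any repository
    zypper packages --orphaned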

18.7 Rolling Back a Service Pack

If a service pack does not work for you, SUSE Linux Enterprise supports reverting the system to the state it was in before the service pack migration was started. The prerequisite is a Btrfs root partition with snapshots enabled (this is the default when installing SLES 12). See Chapter 7, System Recovery and Snapshot Management with Snapper for details.

  1. Get a list of all Snapper snapshots:

    sudo snapper list

    Review the output to locate the snapshot that was created immediately before the service pack migration was started. The column Description contains a corresponding statement and the snapshot is marked as important in the column Userdata. Memorize the snapshot number from the column # and its date from the column Date.

  2. Reboot the system. From the boot menu, select Start boot loader from a read-only snapshot and then choose the snapshot with the date and number you memorized in the previous step. A second boot menu (the one from the snapshot) is loaded. Select the entry starting with SLES 12 and boot it.

  3. The system boots into the previous state with the system partition mounted read-only. Log in as root and check whether you have chosen the correct snapshot. Also make sure everything works as expected. Note that since the root file system is mounted read-only, restrictions in functionality may apply.

    In case of problems or if you have booted the wrong snapshot, reboot and choose a different snapshot to boot from—up to this point no permanent changes have been made. If the snapshot is correct and works as expected, make the change permanent by running the following command:

    snapper rollback

    Reboot afterward. On the boot screen, choose the default boot entry to reboot into the reinstated system.

  4. Check if the repository configuration has been properly reset. Furthermore, check if all products are properly registered. If either one is not the case, updating the system at a later point in time may no longer work, or the system may be updated using the wrong package repositories.

    Make sure the system can access the Internet before starting this procedure.

    1. Refresh services and repositories by running

      sudo zypper ref -fs
    2. Get a list of active repositories by running

      sudo zypper lr

      Carefully check the output of this command. None of the services and repositories that were added for the update should be listed. For example, if you are rolling back from a service pack migration from SLES 12 SP1 to SLES 12 SP2, the list must not contain the repositories SLES12-SP2-Pool and SLES12-SP2-Updates, but rather the SP1 versions (see the quick check after this procedure).

      If wrong repositories are listed, delete them and, if necessary, replace them with the versions matching your product or service pack version. For a list of repositories for the supported migration paths refer to Section 15.6, “Repository Model”.

    3. Lastly, check the registration status for all installed products by running

      SUSEConnect --status

      All products should be reported as being Registered. If this is not the case, repair the registration by running

      SUSEConnect --rollback

Now you have successfully reverted the system to the state that was captured immediately before the service pack migration was started.
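For the repository check in the procedure above, a quick filter helps spot leftovers. This is a minimal sketch, assuming a rollback from SP2 to SP1 as in the example (adjust the pattern to your versions):

    # No SP2 repositories should remain after the rollback
    zypper lr | grep -i sp2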

19 Backporting Source Code

Abstract

SUSE extensively uses backports, for example for the migration of current software fixes and features into released SUSE Linux Enterprise packages. The information in this chapter explains why it can be misleading to compare version numbers to judge the capabilities and the security of SUSE Linux Enterprise software packages. This chapter also explains how SUSE keeps the system software secure and current while maintaining compatibility for your application software on top of SUSE Linux Enterprise products. You will also learn how to check which public security issues are actually addressed in your SUSE Linux Enterprise system software, and the current status of your software.

19.1 Reasons for Backporting

Upstream developers are primarily concerned with advancing the software they develop. Often they combine fixing bugs with introducing new features which have not yet received extensive testing and which may introduce new bugs.

For distribution developers, it is important to distinguish between:

  • bugfixes with a limited potential for disrupting functionality; and

  • changes that may disrupt existing functionality.

Usually, distribution developers do not follow all upstream changes once a package has become part of a released distribution. Instead, they stick with the upstream version they initially released and create patches based on upstream changes to fix bugs. This practice is known as backporting.

Distribution developers generally will only introduce a newer version of software in two cases:

  • when the changes between their packages and the upstream versions have become so large that backporting is no longer feasible, or

  • for software that inherently ages badly, like anti-malware software.

SUSE uses backports extensively because this is how we strike a good balance between several concerns for enterprise software. The most important of these are:

  • Having stable interfaces (APIs) that software vendors can rely on when building products for use on SUSE's enterprise products.

  • Ensuring that packages used in the release of SUSE's enterprise products are of the highest quality and have been thoroughly tested, both in themselves and as part of the whole enterprise product.

  • Maintaining the various certifications of SUSE's enterprise products by other vendors, like certifications for Oracle or SAP products.

  • Allowing SUSE's developers to focus on making the next version of the product as good as they can make it, rather than having to spread their focus thinly across a wide range of releases.

  • Keeping a clear view of what is in a particular enterprise release, so that our support can provide accurate and timely information about it.

19.2 Reasons against Backports

It is a general policy rule that no new upstream versions of a package are introduced into our enterprise products. However, this rule is not absolute. For certain types of packages, in particular anti-virus software, security concerns weigh heavier than the conservative approach that is preferable from the perspective of quality assurance. For packages in that class, newer versions are occasionally introduced into a released version of an enterprise product line.

Occasionally, the choice is made to introduce a new version rather than a backport for other types of packages as well. This is done when producing a backport is not economically feasible or when there is a highly relevant technical reason to introduce the newer version.

19.3 The Implications of Backports for Interpreting Version Numbers

Because of the practice of backporting, one cannot simply compare version numbers to determine whether a SUSE package contains a fix for a particular issue or has had a particular feature added to it. With backporting, the upstream part of a SUSE package's version number merely indicates what upstream version the SUSE package is based on. It may contain bug fixes and features that are not in the corresponding upstream release, but that have been backported into the SUSE package.

One particular area where the limited value of version numbers can cause problems is security scanning tools. Some security vulnerability scanning tools (or particular tests in such tools) operate solely on version information. These tools and tests are therefore prone to generating false positives (a piece of software being incorrectly identified as vulnerable) when backports are involved. When evaluating reports from security scanning tools, always check whether an entry is based on a version number or on an actual vulnerability test.

19.4 How to Check Which Bugs are Fixed and Which Features are Backported and Available

There are several locations where information regarding backported bug fixes and features is stored:

  • The package's changelog:

    rpm -q --changelog name-of-installed-package
    rpm -qp --changelog packagefile.rpm

    The output briefly documents the change history of the package.

  • The package changelog may contain entries like bsc#1234 (short for Bugzilla SUSE.com) that refer to bugs in SUSE's Bugzilla tracking system, or links to other bug tracking systems. Because of confidentiality policies, not all such information may be accessible to you.

  • A package may contain a /usr/share/doc/PACKAGENAME/README.SUSE file which contains general, high-level information specific to the SUSE package.

  • The RPM source package contains the patches that were applied during the building of the regular binary RPMs as separate files that can be interpreted if you are familiar with reading source code. See Section 6.1.2.5, “Installing or Downloading Source Packages” for installing sources of SUSE Linux Enterprise software; see Section 6.2.5, “Installing and Compiling Source Packages” for building packages on SUSE Linux Enterprise; and see the Maximum RPM book for the inner workings of SUSE Linux Enterprise software package builds.

  • For security bug fixes, consult the SUSE security announcements. These often refer to bugs through standardized names like CAN-2005-2495, which are maintained by the Common Vulnerabilities and Exposures (CVE) project (see the example after this list).
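Combining the changelog query with a search for a CVE identifier is a quick way to check whether a specific public security issue is addressed in an installed package. This is a minimal sketch; the package name and CVE identifier are examples only:

    # Does the changelog of the installed bash package mention this CVE?
    rpm -q --changelog bash | grep -i CVE-2014-6271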

A Documentation Updates


This chapter lists content changes for this document.

This manual was updated on the following dates:

A.1 January 2018 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)

General
Chapter 4, Cloning Disk Images

Added section about cleaning up cloned disk images, see Chapter 4, Cloning Disk Images. (FATE#321159).

Section 11.1, “List of Optional Modules”

Added a new section listing all optional modules.

Chapter 13, Managing Users with YaST
Bugfixes

A.2 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)

General
Chapter 6, Preparing the Boot of the Target System
Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products
Chapter 15, Life Cycle and Support
Chapter 16, Upgrading SUSE Linux Enterprise
Chapter 17, Upgrading Offline
Chapter 18, Upgrading Online
Bugfixes

A.3 April 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP2)

Section 15.3, “Module Life Cycles”
Bugfixes

A.4 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.

General Changes to this Guide
  • The complete guide has been revised, restructured, and flattened (Fate #319115).

Chapter 3, Installation with YaST
Chapter 15, Life Cycle and Support
Chapter 18, Upgrading Online
Bugfixes

A.5 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)

Chapter 9, Advanced Disk Setup

A.6 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

General
  • SMT Guide is now part of the documentation for SUSE Linux Enterprise Desktop.

  • Add-ons provided by SUSE have been renamed as modules and extensions. The manuals have been updated to reflect this change.

  • Numerous small fixes and additions to the documentation, based on technical feedback.

  • The registration service has been changed from Novell Customer Center to SUSE Customer Center.

  • In YaST, you will now reach Network Settings via the System group. Network Devices is gone (https://bugzilla.suse.com/show_bug.cgi?id=867809).

Chapter 3, Installation with YaST
Chapter 10, Installing or Removing Software
Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products
  • Updated chapter to reflect the software changes to the former YaST SUSE Customer Center Configuration dialog (now called Product Registration) and the YaST Add-On Products module (Fate #318800).

Chapter 9, Advanced Disk Setup
  • Mentioned that subvolumes for /var/lib/mariadb, /var/lib/pgsql, and /var/lib/libvirt/images are created with the option no copy on write by default to avoid extensive fragmenting with Btrfs.

Subscription Management
Part VI, “Updating and Upgrading SUSE Linux Enterprise”
Bugfixes

A.7 February 2015 (Documentation Maintenance Update)

Section 3.10, “Clock and Time Zone”

With NTP disabled, it is recommended to avoid writing the system time to the hardware clock. Therefore, set SYSTOHC=no.

Bugfixes

A.8 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)

General
  • Removed all KDE documentation and references because KDE is no longer shipped.

  • Removed all references to SuSEconfig, which is no longer supported (Fate #100011).

  • Move from System V init to systemd (Fate #310421). Updated affected parts of the documentation.

  • YaST Runlevel Editor has changed to Services Manager (Fate #312568). Updated affected parts of the documentation.

  • Removed all references to ISDN support, as ISDN support has been removed (Fate #314594).

  • Removed all references to the YaST DSL module as it is no longer shipped (Fate #316264).

  • Removed all references to the YaST Modem module as it is no longer shipped (Fate #316264).

  • Btrfs has become the default file system for the root partition (Fate #315901). Updated affected parts of the documentation.

  • dmesg now provides human-readable time stamps in a ctime()-like format (Fate #316056). Updated affected parts of the documentation.

  • syslog and syslog-ng have been replaced by rsyslog (Fate #316175). Updated affected parts of the documentation.

  • MariaDB is now shipped as the relational database instead of MySQL (Fate #313595). Updated affected parts of the documentation.

  • SUSE-related products are no longer available from http://download.novell.com but from http://download.suse.com. Adjusted links accordingly.

  • Novell Customer Center has been replaced with SUSE Customer Center. Updated affected parts of the documentation.

  • /var/run is mounted as tmpfs (Fate #303793). Updated affected parts of the documentation.

  • The following architectures are no longer supported: IA64 and x86. Updated affected parts of the documentation.

  • The traditional method for setting up the network with ifconfig has been replaced by wicked. Updated affected parts of the documentation.

  • A lot of networking commands are deprecated and have been replaced by newer commands (usually ip). Updated affected parts of the documentation.

    arp: ip neighbor
    ifconfig: ip addr, ip link
    iptunnel: ip tunnel
    iwconfig: iw
    nameif: ip link, ifrename
    netstat: ss, ip route, ip -s link, ip maddr
    route: ip route
  • Numerous small fixes and additions to the documentation, based on technical feedback.

Chapter 3, Installation with YaST
Chapter 16, Upgrading SUSE Linux Enterprise
Chapter 8, Setting Up Hardware Components with YaST
  • Removed the following sections as the respective YaST modules are no longer included: Hardware Information, Setting Up Graphics Card and Monitor, Mouse Model, and Setting Up a Scanner.

  • Removed content about mouse setup and adjusted Section 8.1, “Setting Up Your System Keyboard Layout”.

Chapter 10, Installing or Removing Software
Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products
Subscription Management
  • For registering clients against an SMT server, suse_register has been replaced with SUSEConnect (Fate #316585).

Bugfixes
SUSE Linux Enterprise Desktop 12 SP3

GNOME User Guide

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

Publication Date: May 07, 2018
About This Guide
Available Documentation
Feedback
Documentation Conventions
I Introduction
1 Getting Started with the GNOME Desktop
1.1 Logging In
1.2 Desktop Basics
1.3 Pausing or Finishing Your Session
2 Working with Your Desktop
2.1 Managing Files and Directories
2.2 Accessing Removable Media
2.3 Searching for Files
2.4 Copying Text Between Applications
2.5 Managing Internet Connections
2.6 Exploring the Internet
2.7 E-mail and Scheduling
2.8 Opening or Creating Documents with LibreOffice
2.9 Controlling Your Desktop’s Power Management
2.10 Creating, Displaying, and Decompressing Archives
2.11 Taking Screenshots
2.12 Viewing PDF Files
2.13 Obtaining Software Updates
2.14 For More Information
3 Customizing Your Settings
3.1 The GNOME Settings Dialog
3.2 Personal
3.3 Hardware
3.4 System
4 Assistive Technologies
4.1 Enabling Assistive Technologies
4.2 Visual Impairments
4.3 Hearing Impairments
4.4 Mobility Impairments
4.5 For More Information
II Connectivity, Files and Resources
5 Accessing Network Resources
5.1 Connecting to a Network
5.2 General Notes on File Sharing and Network Browsing
5.3 Accessing Network Shares
5.4 Sharing Directories
5.5 Managing Windows Files
5.6 Configuring and Accessing a Windows Network Printer
6 Managing Printers
6.1 Installing a Printer
7 Backing Up User Data
7.1 Creating Backups
7.2 Restoring Data
8 Passwords and Keys: Signing and Encrypting Data
8.1 Signing and Encryption
8.2 Generating a New Key Pair
8.3 Modifying Key Properties
8.4 Importing Keys
8.5 Exporting Keys
8.6 Signing a Key
8.7 Password Keyrings
8.8 Key Servers
8.9 Key Sharing
9 gFTP: Transferring Data from the Internet
9.1 ASCII Compared to Binary Transfers
9.2 Connecting to a Remote Server
9.3 Transferring Files
9.4 Setting Up an HTTP Proxy Server
9.5 For More Information
III LibreOffice
10 LibreOffice: The Office Suite
10.1 LibreOffice Modules
10.2 Starting LibreOffice
10.3 The LibreOffice User Interface
10.4 Compatibility with Other Office Applications
10.5 Saving Files with a Password
10.6 Signing Documents
10.7 Customizing LibreOffice
10.8 Changing the Global Settings
10.9 Using Templates
10.10 Setting Metadata and Properties
10.11 For More Information
11 LibreOffice Writer
11.1 Creating a New Document
11.2 Formatting with Styles
11.3 Working with Large Documents
11.4 Using Writer as an HTML Editor
12 LibreOffice Calc
12.1 Creating a New Document
12.2 Using Formatting and Styles in Calc
12.3 Working With Sheets
12.4 Conditional Formatting
12.5 Grouping and Ungrouping Cells
12.6 Freezing Rows or Columns as Headers
13 LibreOffice Impress, Base, Draw, and Math
13.1 Using Presentations with Impress
13.2 Using Databases with Base
13.3 Creating Graphics with Draw
13.4 Creating Mathematical Formulas with Math
IV Internet, Communication and Collaboration
14 Firefox: Browsing the Web
14.1 Starting Firefox
14.2 Navigating Web Sites
14.3 Finding Information
14.4 Managing Bookmarks
14.5 Using the Download Manager
14.6 Security
14.7 Customizing Firefox
14.8 Printing from Firefox
14.9 For More Information
15 Evolution: E-Mailing and Calendaring
15.1 Starting Evolution
15.2 Setup Assistant
15.3 Using Evolution
15.4 For More Information
16 Empathy: Instant Messaging
16.1 Starting Empathy
16.2 Configuring Accounts
16.3 Managing Contacts
16.4 Chatting with Friends
16.5 For More Information
17 Ekiga: Using Voice over IP
17.1 Starting Ekiga
17.2 Configuring Ekiga
17.3 The Ekiga User Interface
17.4 Making a Call
17.5 Answering a Call
17.6 Using the Address Book
17.7 For More Information
V Graphics and Multimedia
18 GIMP: Manipulating Graphics
18.1 Graphics Formats
18.2 Starting GIMP
18.3 User Interface Overview
18.4 Getting Started
18.5 Saving and Exporting Images
18.6 Editing Images
18.7 Printing Images
18.8 For More Information
19 GNOME Videos
19.1 Using GNOME Videos
19.2 Modifying GNOME Videos Preferences
20 Brasero: Burning CDs and DVDs
20.1 Creating a Data CD or DVD
20.2 Creating an Audio CD
20.3 Copying a CD or DVD
20.4 Writing ISO Images
20.5 Creating a Multisession CD or DVD
20.6 For More Information
A Help and Documentation
A.1 Using GNOME Help
A.2 Additional Help Resources
A.3 For More Information
B Documentation Updates
B.1 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
B.2 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
B.3 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)
B.4 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)
C GNU Licenses
C.1 GNU Free Documentation License
List of Examples
11.1 Use of Styles

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide


This manual introduces you to the GNOME graphical desktop environment as implemented in SUSE® Linux Enterprise Desktop, and shows you how to configure it to meet your personal needs and preferences. It also introduces you to several programs and services. It is intended for users who have experience using a graphical desktop environment such as macOS*, Windows*, or other Linux desktops.

The manual is divided into the following parts:

Introduction

Get to know your GNOME desktop, learn how to cope with basic and daily tasks using the central GNOME applications and various small utilities. Get an overview of the possibilities that GNOME offers for modifying and individualizing the desktop according to your needs and wishes. Learn how to use assistive technologies to improve accessibility in case of vision or mobility impairment.

Connectivity, Files and Resources

Learn how to manage and exchange data on your system or on a network: connecting to a network and sharing files, managing printers, or creating backups of your data. This part also shows how to sign and encrypt your mails and documents and how to use file transfer clients to transfer data from or to the Internet.

LibreOffice

Introduces the LibreOffice suite, including Writer, Calc, Impress, Base, Draw, and Math.

Internet, Communication and Collaboration

Use a Web browser and get to know the e-mailing and calendaring software. Communicate with others using Instant Messaging or Voice over IP.

Graphics and Multimedia

Get to know GIMP, an image manipulation program that meets the needs of both amateurs and professionals. Get introduced to your desktop's applications for playing movies. Learn how to create data or audio CDs and DVDs for archiving your data.

1 Available Documentation

Note
Note: Online Documentation and Latest Updates

Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates, and browse or download the documentation in various formats.

In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.

The following documentation is available for this product:

Installation Quick Start

Lists the system requirements and guides you step-by-step through the installation of SUSE Linux Enterprise Desktop from DVD, or from an ISO image.

Deployment Guide

Shows how to install single or multiple systems and how to exploit the product inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly-customized, and automated installation technique.

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Security Guide

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

System Analysis and Tuning Guide

An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

GNOME User Guide

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

2 Feedback


Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

3 Documentation Conventions


The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, AltF1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

Part I Introduction

1 Getting Started with the GNOME Desktop

This section describes the conventions, layout, and common tasks of the GNOME desktop as implemented in your product.

2 Working with Your Desktop

In this chapter you will learn how to work with files and burn CDs. You will also find out how to perform regular tasks with your desktop.

3 Customizing Your Settings

You can change the way the GNOME desktop looks and behaves to suit your own personal tastes and needs. Some possible changes of settings are:

4 Assistive Technologies

The GNOME desktop includes assistive technologies to support users with various impairments and special needs, and to interact with common assistive devices. This chapter describes several assistive technology applications designed to meet the needs of users with physical disabilities like low vision or impaired motor skills.

1 Getting Started with the GNOME Desktop


This section describes the conventions, layout, and common tasks of the GNOME desktop as implemented in your product.

GNOME is an easy-to-use graphical interface that can be customized to meet your needs and personal preferences. This section describes the default configuration of GNOME. If you or your system administrator modify the defaults, some aspects might be different, such as appearance or key combinations.

Note
Note: Included Session Configurations

Some versions of SUSE Linux Enterprise ship with as many as three different session configurations based on GNOME: GNOME, GNOME Classic, and SLE Classic. The version described here is the default configuration of SUSE Linux Enterprise Desktop, called SLE Classic.

1.1 Logging In

In general, all users must authenticate unless Auto Login is enabled for a specific user. In this case, that user is logged in automatically when the system starts. This can save time, especially if a computer is used by a single person, but it may reduce account security. Auto Login can be enabled or disabled during installation or at any time using the YaST User and Group Management module. For more information, refer to Chapter 13, Managing Users with YaST.

If your computer is running in a network environment and you are not the only person using the machine, you are usually prompted to enter your user name and password when you start the system. If you did not set up the system and user account yourself, check with your system administrator for your user name and password.

GNOME Login Screen
Figure 1.1: GNOME Login Screen
Procedure 1.1: Normal Login
  1. If your user name is listed, click it.

    If your user name is not listed, click Not listed?. Then enter your user name and click Next.

  2. Enter your password and click Sign in.

1.1.1 Switching the Session Type Before Logging In

If you want to try one of the additional GNOME session configurations or try another desktop environment, follow the steps below.

  1. On the login screen, click your user name or enter it, as you normally would.

  2. To change the session type, click the cog wheel icon. A menu appears.

    GNOME Login Screen—Session Type
    Figure 1.2: GNOME Login Screen—Session Type
  3. From the menu, select one of the entries. Depending on your configuration there may be different choices, but the default selection is as follows.

    GNOME

    A GNOME 3 configuration that is very close to the upstream design. It focuses on interrupting users as little as possible. However, starting applications and switching between them works differently from many other desktop operating systems. It uses a single panel at the top of the screen.

    GNOME Classic

    A GNOME 3 configuration that is designed to appeal to former users of GNOME 2. The desktop has two panels, one at the top and another at the bottom.

    IceWM

    A very basic desktop designed to use little resources. It can be used as a fallback, if other options do not work or are slow.

    SLE Classic (default)

    The default desktop of SUSE Linux Enterprise, designed to appeal to users of older versions of SUSE Linux Enterprise and users of Microsoft* Windows*. This desktop is a GNOME 3 configuration and uses a single panel that is placed at the bottom of the screen.

  4. Enter your password into the text box, then click Sign In.

After you switch to another session type once, that session becomes your default. To switch back, repeat the steps above.

1.1.2 Assistive Tools

In the top right corner, there are status icons and the assistive technologies menu. Click the status icons to open a menu that allows you to set the sound volume and to restart or power off the machine.

1.2 Desktop Basics

The GNOME desktop appears after you first log in. It displays a panel at the bottom showing the following elements (from left to right):

Applications menu

Click Applications in the left corner to open a menu with all the installed programs. These are classified under different categories for a better overview. Sub-items open automatically when you hover the mouse over them.

Click Activities Overview in the bottom part of the Applications menu to open the Activities Overview, where you can start programs and manage those already running.

The Activities Overview is described further in Section 1.2.1, “Activities Overview”.

Places menu

Click Places to open a menu with shortcuts to your personal directories, connected storage media, and network resources.

Task switcher

All applications currently open on the desktop (on the active workspace) appear in the middle part of the panel. You can bring these applications to the foreground by clicking their names.

Notification indicator (not always visible)

When there are notifications, for example, for new chat or e-mail messages or concerning system updates, an indicator will appear. The indicator is a blue circle with the number of available notifications displayed in the middle. Click the indicator to open the Message Tray where you can interact with all the notifications.

Workspace switcher

This menu lets you select a workspace (also called a virtual desktop) to work on. This feature can help you work with many windows. For example, you could move windows needed for one project to workspace 1 and windows needed for another project to workspace 2.

Date and time

The current day of the week and time are shown to the right of the workspace switcher. Click it to open a menu where you can access a calendar and adjust date and time settings.

Status icons

In the right corner of the panel, icons showing the current status of the network connection, sound volume and power/battery status are displayed.

Click the icons to open a menu where you can adjust sound volume, display brightness, network connections, and power settings. Click your user name to display options for logging out or switching to another user.

The three icons in the lower part of the menu allow you to, from left to right, open the GNOME settings dialog, lock the screen, and power off or restart your computer.

GNOME Desktop—SLE Classic
Figure 1.3: GNOME Desktop—SLE Classic

1.2.1 Activities Overview

Activities Overview is a full screen mode that comprises all the ways in which you can switch from one activity to another. It shows previews of all open windows and icons for favorite and running applications. It also integrates searching and browsing functionality.

1.2.1.1 Opening the Activities Overview

There are multiple ways to open the Activities Overview:

  • Open the Applications menu on the bottom panel and select Activities Overview.

  • Press Meta.

  • Forcefully move the pointer to the top left corner (the so-called hot corner).

1.2.1.2 Using the Activities Overview

In the following, the most important parts of the Activities Overview are explained.

Dash

The Dash is the bar positioned on the center left. It contains favorite applications and all applications with open windows. If you move the mouse pointer over one of the icons, GNOME will display the name of the corresponding application nearby. A light glow indicates that the application is running and has at least one open window.

Right-clicking an icon opens a menu which offers different actions depending on the associated program. Using Add to Favorites, you can place the application icon permanently in Dash. To remove a program icon from Dash, select Remove from Favorites. To rearrange an icon, use the mouse to drag it to a new position.

Search box

At the top, there is a search box that you can use to find applications, settings, and files in your home directory.

To search, you do not need to click the search box. You can begin typing directly after opening the Activities Overview. Search starts immediately; you do not need to press Enter.

Workspace selector

On the right, there is an overview of available workspaces. To switch to a workspace, click its preview.

To move a window from one workspace to another, drag a window preview from one workspace preview to another.

1.2.2 Starting Programs

To start a program, you have several options:

  • In the bottom panel, click Applications and select the desired program from the hierarchical menu.

  • Open the Activities Overview by pressing Meta. Now click an application icon or search for an application. If you do not know the exact application name, you can search for generic category names such as image editor.

    Further information about the Activities Overview can be found in Section 1.2.1, “Activities Overview”.

  • If you know the exact command to start the program, you can press AltF2, enter the command into the dialog and press Enter.

    Note that the only button displayed in the window is labeled Close and will indeed close the window.

1.3 Pausing or Finishing Your Session

When you have finished using the computer, there are multiple ways to finish the session. Which one is right in a given situation depends on how long you will be away and whether you are worried about energy consumption, among other things.

  • Locking the Computer.  Pause your session, but keep the computer on. Make sure that nobody can look at or change your work while you are away on a break. Other users can log in and work in the meantime. Other users can shut down the computer, but a prompt will warn them that you are still logged in.

  • Logging Out.  Finish the current session, but leave the computer on, so other users can log in.

  • Shutting Down.  Finish the current session and turn off the computer.

  • Restarting.  Finish the current session and restart the computer. Restarting is necessary to apply some system updates.

  • Suspending the Computer.  Pause your session and put the computer in a state where it consumes a minimal amount of energy. Suspend mode can be configured to lock your screen, so nobody can look at or change your work. Waking up the computer is generally much quicker than a full computer start.

    This mode is also known as suspend-to-RAM, sleep or standby mode.

1.3.1 Locking the Screen

To lock the screen, click the status icons on the right of the main panel and click the padlock icon.

When you lock your screen, at first a curtain with a clock will appear. After some time the screen turns black. To unlock the screen, move the mouse or press a key to display the locked screen dialog. Enter your password, then press Enter to unlock the screen.

1.3.2 Logging Out or Switching Users

  1. Click the status icons on the right of the main panel to open the menu.

  2. Click your user name.

  3. Select one of the following options:

    Log Out

    Logs you out of the current session and returns you to the Login screen.

    Switch User

    Suspends your session, allowing another user to log in and use the computer.

    Account Settings

    Takes you to the user settings where you can change your password.

1.3.3 Restarting or Shutting Down the Computer

  1. Click the status icons on the right of the main panel to open the menu.

  2. Click the power off icon in the lower right part of the menu.

  3. Select one of the following options:

    Power Off

    Logs you out of the current session, then turns off the computer.

    Restart

    Logs you out of the current session, then restarts the computer.

1.3.4 Suspending the Computer

  1. Click the status icons on the right of the main panel to open the menu.

  2. Hold Alt pressed. The power off icon in the lower right part of the menu turns into a pause icon. Click the pause icon.

2 Working with Your Desktop


In this chapter you will learn how to work with files and burn CDs. You will also find out how to perform regular tasks with your desktop.

2.1 Managing Files and Directories

You can open GNOME Files in multiple ways:

  • Click Applications › Accessories › Files.

  • Open the Activities Overview and search for files.

  • On the desktop, double-click Home.

  • Open the Places menu and select any entry, such as Home.

File Manager
Figure 2.1: File Manager

The elements of the GNOME Files window include the following:

Toolbar

The toolbar contains back and forward buttons, the path bar, a search function, elements to let you change the layout of the content area, and the application menu.

Menu

The menu is the last icon on the toolbar. It lets you perform many tasks, such as opening the preferences dialog, creating a new directory or opening a new window or tab.

Sidebar

The sidebar lets you navigate between often-used directories and external or network storage devices. To display or hide the sidebar, press F9.

Content Area

Displays files and directories.

Use the icons in the top right part of the window to switch between list and grid icon view.

Context Menus

Open a context menu by right-clicking inside the content area. The items in this menu depend on where you right-click.

For example, if you right-click a file or directory, you can select items related to the file or directory. If you right-click the background of a content area, you can select items related to the display of items in the content area.

Floating Statusbar

The floating statusbar appears when a file is selected. It displays the file name and size.

2.1.1 Key Combinations

The following table lists a selection of key combinations of GNOME Files.

Table 2.1: GNOME Files Key Combinations

Key Combination

Description

Alt← / Alt→

Go backward/go forward.

Alt↑

Open the parent directory.

←, →, ↑, ↓

Select an item.

Alt↓ or Enter

Open an item.

AltEnter

Open an item's Properties dialog.

ShiftAlt↓

Open an item and close the current directory.

CtrlL

Transform the path bar from a button view to a text box.

Exit this mode by pressing Enter (go to the location) or Esc (to remain in the current directory).

/

Transform the path bar from a button view to a text box and replace the current path with /.

AltHome

Open your home directory.

Any number or letter key

Start a search within the current directory and its subdirectories. The character you pressed is used as the first character of the search term. Search happens as you type; you do not need to press Enter.

CtrlT

Open a new tab.

Del

Moves the selected file or directory to the trash, from which it can be restored with Undo.

2.1.2 Compressing Files or Directories

Sometimes, it is useful to archive or compress files, for example:

  • You want to attach an entire directory, including its subdirectories, to an e-mail.

  • You want to attach a large file to an e-mail.

  • You want to save space on your hard disk and have files you rarely use.

In all these cases, you can create a compressed file, such as a ZIP file, which can contain multiple original files. How much smaller the compressed version is than the original depends on the file type. Many video, image and office document formats are already compressed and will only become marginally smaller.

  1. In the GNOME Files content area, right-click the directory you want to archive, then click Compress.

  2. Accept the default archive file name or provide a new one.

  3. Select a file extension from the drop-down box.

    • .zip files are supported on most operating systems, including Windows*.

    • .tar.gz files are compatible with most Linux* and Unix* systems.

    • .7z files usually offer better compression ratios than other formats, but are not as widely supported.

  4. Specify a location for the archive file, then click Create.

To extract an archived file, right-click the file, then select Extract Here. You can also double-click the compressed file to open it and see which files are included.

For more information on compressed files, see Section 2.10, “Creating, Displaying, and Decompressing Archives”.
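If you prefer the command line, equivalent archives can be created with standard tools. This is a minimal sketch, assuming a directory named myfolder in the current working directory:

    # Create a ZIP archive of the directory, including subdirectories
    zip -r myfolder.zip myfolder
    # Create a gzip-compressed tar archive of the same directory
    tar czf myfolder.tar.gz myfolder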

2.1.3 Burning a CD/DVD

If your system has a CD or DVD writer, you can use GNOME Files to burn CDs and DVDs. If you want to burn an audio CD or need more control over the result, see Chapter 20, Brasero: Burning CDs and DVDs.

  1. Open GNOME Files.

  2. Insert a blank medium.

  3. Find the files you want to add to the medium and drag them to the sidebar item called Blank CD-R Disc. (The label may read slightly differently, depending on the type of medium you inserted.) When your mouse pointer is over the sidebar item, a small + should appear next to the pointer.

  4. When you have dragged all files onto the sidebar item Blank CD-R Disc, click it.

  5. Provide a name next to Disc Name or keep the proposal.

  6. Click Write to Disc.

  7. In the CD/DVD Creator dialog that appears, make sure the right medium is selected. Then click Burn.

    The files are burned to the disc. This can take a few minutes, depending on the amount of data being burned and the speed of your burner.

  8. After the medium has been burned, it will be ejected from the drive. In the window CD/DVD Creator, you can click Close.

To burn an ISO disc image, first insert a medium, then double-click the ISO file in GNOME Files. In the dialog Image Burning Setup, click Burn.

2.1.4 Creating a Bookmark

Use the bookmarks feature in GNOME Files to quickly jump to your favorite directories from the sidebar.

  1. Switch to the directory for which you want to create a bookmark in the content area.

  2. Click the list icon, then select Bookmark this Location from the menu.

    The bookmark now appears in the sidebar, with the directory name as the bookmark name.

  3. (Optional) If you want, you can change the name of the bookmark. This does not affect the name of the bookmarked directory itself. To change the name, right-click the new sidebar item and select Rename.

  4. (Optional) If you want, you can change the order in which the bookmarks are displayed. To reorder, click a bookmark and drag it to the desired location.

To switch to a bookmarked directory, click the appropriate sidebar item.

2.1.5 Accessing Remote Files

You can use GNOME Files to access files on remote servers. For more information, see Chapter 5, Accessing Network Resources.

2.2 Accessing Removable Media

To access CDs/DVDs or flash disks, insert or attach the medium. An icon for the medium is automatically created on the desktop. For many types of removable media, a GNOME Files window pops up automatically. If GNOME Files does not open, double-click the icon for that drive on the desktop to view the contents. In GNOME Files, you will see an item for the medium in the sidebar.

Warning
Warning: Unmount to Prevent Data Loss

Do not physically remove flash disks immediately after using them. Even when the system does not indicate that data is being written, the drive may not be finished with a previous operation.

In the sidebar of GNOME Files, click the Eject icon next to the medium to safely remove or unmount the drive.

2.3 Searching for Files

There are multiple ways to search for files or directories. In all cases, the search will be performed on file and directory names. Searching by file size, modification date and other properties is only partially possible in the preinstalled graphical tools. Such searches are easier to do on the command line.
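A minimal sketch of such a command-line search, assuming you are looking for files in your home directory that are larger than 1 MB and were modified within the last seven days:

    # -size +1M: larger than 1 MB; -mtime -7: modified in the last 7 days
    find ~ -type f -size +1M -mtime -7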

Using GNOME Files

In GNOME Files, navigate to the directory from which you want to start the search. Then start typing the search term. To search for objects with a certain modification date or file type, click the arrow-down icon of the search box and modify the properties.

Using the Activities Overview

Open the Activities Overview by pressing Meta. Then start typing the search term. The search will be performed within your home directory.

Using the Desktop Search application

Click Applications › Accessories › Desktop Search. Enter the search term in the text box Search. The search will be performed within your home directory.

2.4 Copying Text Between Applications

Copy and paste works the same as in other operating systems. First select the text, so that it appears highlighted, usually in blue. Then press CtrlC. Now move the keyboard focus to the right position. Finally, to insert the text, press CtrlV.

To copy or paste in the terminal, additionally press Shift together with the above key combinations.

An alternative way of copying and pasting works as follows: First, select the text. Then, middle-click at the position where you want the text to be pasted. As soon as you make another selection, the text from the original selection is replaced in the clipboard.

When copying information between programs, you must keep the source program open and paste the text before closing it. When a program closes, any content from that application that is on the clipboard is lost.

2.5 Managing Internet Connections

To surf the Web or send and receive e-mail messages, you must have configured an Internet connection. If you have installed SUSE Linux Enterprise Desktop on a laptop or a mobile device, NetworkManager is enabled by default. On the GNOME desktop, you can then establish Internet connections with NetworkManager as described in Section 30.3, “Configuring Network Connections”.

Depending on your environment, you can choose in YaST which basic service to use for setting up network connections (either NetworkManager or wicked). For details, see Section 17.4.1.1, “Configuring Global Networking Options”.

2.6 Exploring the Internet

The GNOME desktop includes Firefox, a Mozilla*-based Web browser. You can start it by clicking Applications › Internet › Firefox.

You can type an address into the location bar at the top or click links in a page to move to different pages, like in any other Web browser.

For more information, see Chapter 14, Firefox: Browsing the Web.

2.7 E-mail and Scheduling

For reading and managing your mail and events, use Evolution. Evolution is a groupware program that makes it easy to store, organize and retrieve your personal information.

Evolution seamlessly combines e-mail, a calendar, an address book, and a memo and task list in one easy-to-use application. With its extensive support for communications and data interchange standards, Evolution can work with existing corporate networks and applications, including Microsoft* Exchange.

To start Evolution, click Applications › Internet › Evolution.

The first time you start Evolution, it prompts you with a few questions to set up a mail account and import mail from an old mail client. Then it shows you how many new messages you have and lists upcoming appointments and tasks. The calendar, address book and mail tools are available in the shortcut bar on the left.

For more information, see Chapter 15, Evolution: E-Mailing and Calendaring.

2.8 Opening or Creating Documents with LibreOffice

For creating and editing documents, LibreOffice is installed with the GNOME desktop. LibreOffice is a complete set of office tools that can both read and save Microsoft Office file formats. LibreOffice has a word processor, a spreadsheet, a database, a drawing tool and a presentation program.

To start LibreOffice, click Applications › Office › LibreOffice.

For more information, see Chapter 10, LibreOffice: The Office Suite.

2.9 Controlling Your Desktop’s Power Management

To see the state of the computer battery on your laptop, check the battery icon in the right part of the panel. On certain events, such as a critically low battery state, GNOME will display notifications informing you about the event.

You can open the power settings via Applications › System Tools › Settings › Power.

For more information, see Section 3.3.2, “Configuring Power Settings”.

2.10 Creating, Displaying, and Decompressing Archives

You can use the Archive Manager application (also known as File Roller) to create, view, modify or unpack an archive. An archive is a file that acts as a container for other files. An archive can contain many files, directories and subdirectories, usually in compressed form. Archive Manager supports common formats such as zip, tar.gz, tar.bz2, lzh, and rar. You can also use Archive Manager to create, open and extract a compressed non-archive file, such as a single gzip-compressed document.

To start Archive Manager, click Applications › Utilities › Archive Manager.

If you already have a compressed file, double-click the file name in GNOME Files to view the contents of the archive in Archive Manager.

Archive Manager
Figure 2.2: Archive Manager

2.10.1 Opening an Archive

  1. In Archive Manager, click Open.

  2. Select the archive you want to open.

  3. Click Open.

    Archive Manager displays the following:

    • The archive name in the titlebar.

    • The archive contents in the content area.

    To open another archive, click Open again. Archive Manager opens each archive in a new window. To open another archive in the same window, you must first select Close from the menu in the right part of the window to close the current archive, then click Open.

    If you try to open an archive that was created in a format that Archive Manager does not recognize, the application displays an error message.

  4. To display the archive's properties, click the last icon in the titlebar and select Properties. Details like name, location, type, last modification, number of files, size, and compression ratio are shown.

2.10.2 Extracting Files from an Archive

  1. In Archive Manager, select the files that you want to extract.

  2. Click Extract.

  3. Specify the directory to which Archive Manager extracts the files.

  4. Choose from the following extraction options:

    All files

    Extracts all files from the archive.

    Selected files

    Extracts the selected files from the archive.

    Files

    Extracts from the archive all files that match the specified pattern.

    Keep directory structure

    Reconstructs the directory structure when extracting the specified files.

    For example, suppose you specify /tmp as the destination directory and extract all files from an archive that contains a subdirectory called doc. If you select the Keep directory structure option, Archive Manager extracts the contents of the subdirectory to /tmp/doc.

    If you do not select the Keep directory structure option, Archive Manager does not create any subdirectories. Instead, it extracts all files from the archive, including files from subdirectories, to /tmp.

    Do not overwrite newer files

    If this option is not active, Archive Manager overwrites any files in the destination directory that have the same name as the specified files.

    If you select this option, Archive Manager does not extract the specified file if a file with the same name already exists in the destination directory.

  5. Click Extract.

    To extract an archived file in a file manager window without opening Archive Manager, right-click the file and select Extract Here.

    The Extract operation extracts a copy of the specified files from the archive. The extracted files have the same permissions and modification date as the original files that were added to the archive.

    The Extract operation does not change the contents of the archive.

2.10.3 Creating Archives

  1. In Archive Manager, click the main menu icon in the top left part of the window and select New Archive.

  2. Specify the name and location of the new archive.

  3. Select an archive type from the drop-down box.

  4. Click Create.

    Archive Manager creates an empty archive, but does not yet write the archive to disk. Archive Manager writes a new archive to disk only when the archive contains at least one file. If you create a new archive and quit Archive Manager before you add any files to the archive, the archive will be deleted.

  5. Add files and directories to the new archive:

    1. Click Add Files and select the files or directories you want to add.

    2. Click Add.

      Archive Manager adds the files to the current directory in the archive.

You can also add files to an archive in a file manager window without opening Archive Manager. See Section 2.1.2, “Compressing Files or Directories” for more information.
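
If you prefer working in a shell, the same archive formats can be handled with standard command-line tools. A minimal sketch using tar; the file and directory names are examples:

    # create a gzip-compressed archive of a directory
    tar -czf archive.tar.gz doc/

    # list the contents of the archive
    tar -tzf archive.tar.gz

    # extract the archive into /tmp
    tar -xzf archive.tar.gz -C /tmp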

2.11 Taking Screenshots

You can take a snapshot of your screen or of an individual application window by using the Take Screenshots utility. Start it by pressing Print to take a screenshot of the entire desktop or by pressing Alt+Print to take a screenshot of the currently active window or dialog.

The screenshots are automatically saved to your ~/Pictures directory.

You can also use GIMP to take screenshots. (For more information on GIMP, see Chapter 18, GIMP: Manipulating Graphics). In GIMP, click File › Create › Screenshot, select an area, choose a delay and then click Snap.
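
If the gnome-screenshot utility is installed, screenshots can also be taken from a shell; the available options may vary between versions, and the file name is an example:

    # capture the active window after a delay of 2 seconds
    gnome-screenshot --window --delay=2 --file=window.png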

2.12 Viewing PDF Files

Documents that need to be shared or printed across platforms can be saved as PDF (Portable Document Format) files. Document Viewer (also known as Evince) can open PDF files and many similar file types, such as XPS, DjVu, or TIFF.

Note
Note: Rare Display Issues

In rare cases, documents will not be displayed correctly in Document Viewer. This can happen, for example, with certain forms, animations or 3D images. In such cases, ask the authors of the file what viewer they recommend. However, in some cases the recommended viewer will not work on Linux.

Document Viewer
Figure 2.3: Document Viewer

To open Document Viewer, double-click a PDF file in a file manager window. Document Viewer will also open when you download a PDF file from a Web site. To open Document Viewer without a file, select Applications › Office › Document Viewer.

To view a PDF file in Document Viewer, click the cog wheel icon to open the menu and select Open. Now locate the desired PDF file and click Open.

Use the navigation icons at the top of the window or the thumbnails in the left panel to navigate through the document. If your PDF document provides bookmarks, you can access them in the left panel of the viewer.

2.13 Obtaining Software Updates

When you connect to the Internet, the updater applet automatically checks whether software updates for your system are available. When important updates are available, you will receive a notification on your desktop.

For detailed information on how to install software updates with the updater applet and how to configure it, refer to the chapter about installing and removing software in Section 10.5, “Keeping the System Up-to-date”.
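
Updates can also be listed and installed from a shell with Zypper, the command-line package manager (root privileges required):

    # list applicable patches
    sudo zypper list-patches

    # install all needed patches
    sudo zypper patch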

2.14 For More Information

Along with the applications described in this chapter for getting started, you can use many other applications on GNOME. Find detailed information about these applications in the other parts of this manual.

To learn more about GNOME and GNOME applications, see http://www.gnome.org.

To report bugs or add feature requests, go to http://bugzilla.gnome.org.

3 Customizing Your Settings


You can change the way the GNOME desktop looks and behaves to suit your own personal tastes and needs, for example the desktop background, the language used, or power-saving behavior.

These settings and others can be changed in the Settings dialog.

3.1 The GNOME Settings Dialog

Whereas YaST is a desktop-independent system-wide tool to configure most aspects of your product installation, the settings dialog is a GNOME configuration tool. It focuses on look and feel, personal settings and preferences of your GNOME desktop.

To access the GNOME settings dialog, click Applications › System Tools › Settings. The dialog is divided into the following three categories:

Personal

From here, you can change the background of your desktop or of the lock screen, and configure language settings. For more information, see Section 3.2, “Personal”.

Hardware

Allows you to configure hardware components such as monitors, printers, mice and touchpads, network adapters and sound devices. You can also change key combination settings and set up power-saving features. For more information, see Section 3.3, “Hardware”.

System

Lets you configure system settings such as date and time, whether to start software when inserting flash disks or whether you want to share your screen with others. You can also set up user accounts. If you want, you can also start YaST from this screen, though it is also available separately from within the menu. For more information, see Section 3.4, “System”.

GNOME Settings Dialog
Figure 3.1: GNOME Settings Dialog

To change some system-wide settings, the control center will prompt you for the root password and start YaST. This is mostly the case for administrator settings (including most of the hardware, the graphical user interface, Internet access, security settings, user administration, software installation and system updates and information). Follow the instructions in YaST to configure these settings. For information about using YaST, refer to the integrated YaST help texts or to the Deployment Guide.

This chapter focuses on individual settings you can change directly in the GNOME settings dialog, without having to use YaST.

3.2 Personal

The following sections introduce examples of how to configure some personal aspects of your GNOME desktop, such as the language used or the desktop background.

3.2.1 Changing the Desktop Background

The desktop background is the image or color that is applied to your desktop. You can also customize the image shown when the screen is locked.

To change the desktop background or the lock screen:

  1. Click Applications › System Tools › Settings › Background.

  2. Click Background or Lock Screen.

  3. Click Wallpapers, Pictures, or Colors.

    Wallpapers are preconfigured images distributed with your system. Pictures are your own images from your Pictures directory (~/Pictures). Colors are predefined colors chosen by GNOME developers.

  4. Choose an option from the list.

  5. When you are satisfied with your choice, click Select.
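
The desktop background can also be set from a shell with the gsettings tool; a minimal sketch, where the image path is an example:

    gsettings set org.gnome.desktop.background picture-uri 'file:///home/tux/Pictures/photo.jpg'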

3.2.2 Configuring Language Settings

SUSE Linux Enterprise Desktop can be configured to use any of several languages. The language setting determines the language of dialogs and menus and can also determine the keyboard and clock layout.

To configure your language settings, click Applications › System Tools › Settings › Region and Language.

Here you can choose:

  • Interface language.

  • Date and number formats, currency and related options.

  • Input sources (keyboard layout). For non-alphabetic languages there can be additional settings.

Note
Note: Settings Made Using ibus-setup Do Not Take Effect

On GNOME, settings made using ibus-setup do not take effect. ibus-setup can only be used to configure IceWM. Instead, always use the Settings application:

  • To change input methods, use the panel Region & Language.

  • To change the key combination that switches between input methods, use the panel Keyboard. In it, choose the category Typing and the entry Switch to next input source.

3.3 Hardware

In the following sections you will find examples of how to configure some hardware aspects of your GNOME desktop, including keyboard or mouse preferences, handling of removable drives (and other media) or screen resolution.

3.3.1 Configuring Bluetooth Settings

The Bluetooth module lets you set the visibility of your machine over Bluetooth and connect to available Bluetooth devices. To configure Bluetooth connectivity, follow these steps:

  1. Click Applications › System Tools › Settings › Bluetooth to open the Bluetooth settings module.

  2. To use Bluetooth, turn the Bluetooth switch on.

  3. To make your computer visible over Bluetooth, turn the Visibility switch on. The computer will start searching for other visible Bluetooth devices in the vicinity and display any found devices in the Devices list. At first, the list may be empty.

    Note
    Note: Temporary Visibility

    The Visibility switch is meant to be used only temporarily. You only need to turn it on for the initial setup of a connection to a Bluetooth device. After the connection has been established, turn off the switch.

  4. On the device you want to connect, turn on Bluetooth connectivity and visibility, too.

  5. If the desired device has been found and is shown in the list, click it to establish a connection to it.

    You will be asked whether the PINs of the two devices match.

  6. If the PINs match, confirm this on both your computer and the device.

    Both are now paired. On your computer, the device in the list is shown as Connected.

    Depending on the device type, you can now, for example, browse it as a storage device in GNOME Files or set a volume for it in the Sound settings.

To connect to a paired Bluetooth device, select the device in the list. In the dialog that appears, turn the Connection switch on. You can send files to the connected device by using the Send Files button. If you are connected to a device such as a mobile phone, you can use it as a network device by activating the appropriate option.

To remove a connected device from the list on your computer, click Remove Device and confirm your choice. To completely remove the pairing, you also need to do so on your device.
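
On systems with BlueZ 5, Bluetooth devices can also be managed from a shell with the interactive bluetoothctl tool; the device address below is an example:

    bluetoothctl
    [bluetooth]# power on
    [bluetooth]# scan on
    [bluetooth]# pair 00:11:22:33:44:55
    [bluetooth]# connect 00:11:22:33:44:55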

3.3.2 Configuring Power Settings

  1. Click Applications › System Tools › Settings › Power to open the Power settings module.

  2. In the upper part of the dialog, you can see the current state of the battery.

  3. In the Power Saving section of the dialog, set the Screen Brightness to conserve power. You can also choose whether to dim the screen after a period of inactivity, set the time interval, and decide whether to turn off wireless networking after that period.

  4. In the Suspend and Power Button section of the dialog, click Automatic Suspend. A separate dialog opens.

    In it, you can turn on automatic suspending and set the associated time intervals. If you are using a computer with a battery, you can configure these separately for when the computer is running on battery power and for when it is plugged in.

    You can also set the action performed when the power button is pressed. Choose Hibernate to use a mode where the computer turns off completely but saves your running session to the hard disk. Alternatively, choose Suspend or Nothing.

3.3.3 Modifying Keyboard Shortcuts

To modify keyboard shortcuts, click Applications › System Tools › Settings › Keyboard.

Keyboard Dialog
Figure 3.2: Keyboard Dialog

The Keyboard dialog shows the keyboard shortcuts that are configured for your system. Click the categories on the right to view the current shortcuts.

To edit a key combination, first click the row. To set a new key combination, press the keys. To disable a shortcut, press Backspace instead.

To configure keyboard accessibility options, refer to Section 4.4, “Mobility Impairments”. To configure your keyboard layout, refer to Section 3.2.2, “Configuring Language Settings”.

3.3.4 Configuring the Mouse and Touchpad

To modify mouse and touchpad options, click Applications › System Tools › Settings › Mouse and Touchpad.

Mouse and Touchpad Settings Dialog
Figure 3.3: Mouse and Touchpad Settings Dialog
  • In the General section of the dialog, you can set the Primary button orientation (left or right).

  • In the Mouse section of the dialog, use Mouse Speed to adjust the sensitivity of the mouse pointer.

  • In the Touchpad section of the dialog, you can turn the touchpad on and off. Use Touchpad Speed to adjust the sensitivity of the touchpad pointer. You can also disable the touchpad while typing and enable clicks by tapping the touchpad.

  • To test your settings, click Test Your Settings and try the pointing device.

For configuration of mouse accessibility options, refer to Section 4.4, “Mobility Impairments”.

3.3.5 Installing and Configuring Printers

The Printers module lets you connect to any available local or remote CUPS server and configure printers.

To start the Printers module, click Applications › System Tools › Settings › Printers. For detailed information, refer to Chapter 6, Managing Printers.

3.3.6 Configuring Screens

To specify resolution and orientation for your screen or to configure multiple screens, click Applications › System Tools › Settings › Displays.

Procedure 3.1: Changing the Settings for a Monitor
  1. To find the right monitor, look for the numbers displayed in the upper left corner of all monitors after you have opened the Displays dialog. To set options for a monitor, click the list item of the monitor. A new dialog appears.

  2. If multiple monitors are attached to the computer, the left part of the dialog will allow you to choose how to use the monitor. You can choose between:

    Primary

    The screen that shows the panel and important messages.

    Secondary Display

    A monitor that expands the desktop of the primary monitor.

    Mirror

    A monitor that mirrors the image on the primary monitor. In terms of resolution, the lowest common denominator will be used.

    Turn Off

    A screen that is not used.

    To rotate the displayed image, use the buttons with the arrows pointing left and right. To mirror the displayed image, use the button with the double arrow icon.

    You can set a different resolution by changing the value next to Resolution. Not all resolutions provide a sharp and unstretched image. To find the best resolution for your monitor, refer to its manual.

  3. When you are done, click Apply.

    The monitors will now readjust. This can take several seconds, during which the screen may be black or distorted.

    Afterward, a confirmation dialog will appear.

  4. If the configuration looks correct, click Keep Changes.

    If the configuration is not what you hoped for, click Revert Settings or wait for 20 seconds. The changes will then be reverted.

Monitor Resolution Settings Dialog
Figure 3.4: Monitor Resolution Settings Dialog
Procedure 3.2: Changing the Arrangement of Multiple Monitors

If you are using multiple screens, set up how they are arranged, so you can use the mouse pointer properly across monitors.

  1. Click Arrange Combined Displays.

  2. To find the right monitor, look for the numbers displayed in the upper left corner of all monitors. Click and drag the monitor image around to move it.

  3. When you are done, click Apply.

  4. If the configuration looks correct, click Keep Changes.

    If the configuration is not what you hoped for, click Revert Settings or wait for 20 seconds. The changes will then be reverted.
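
Display layout can also be configured from a shell with the xrandr tool. The output names below are examples; run xrandr without arguments to see the names used on your system:

    # show connected outputs and their available modes
    xrandr

    # place an external monitor to the right of the laptop panel
    xrandr --output HDMI-1 --mode 1920x1080 --right-of eDP-1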

3.3.7 Configuring Sound Settings

The Sound tool lets you manage sound devices and configure sound effects. In the top part of the dialog, you can set the general output volume or turn the sound off completely.

To open the sound settings, click Applications › System Tools › Settings › Sound.

Configuring Sound Settings
Figure 3.5: Configuring Sound Settings

3.3.7.1 Configuring Sound Devices

Use the Output tab to select the device for sound output. Below the list, adjust the settings for the selected device, for example the balance.

Use the Input tab to set the input device volume or to mute the input temporarily. If you have more than one sound device, you can also select a default device for audio input in the Choose a device for sound input list.

3.3.7.2 Configuring Sound Effects

Use the Sound Effects tab to configure whether and how you want sound to be played when message boxes appear.

Specify the volume at which the sound effects will be played under Alert volume. You can also turn the effects on and off.

Select the Alert Sound to use.

3.3.8 Networking

To set up networking options, click Applications › System Tools › Settings › Network.

In the dialog that appears, you can configure wired or wireless connections, proxies, and VPNs. If you are unsure which network parameters to use, contact your system administrator.

To learn more about setting up network connections, see Chapter 30, Using NetworkManager.

3.4 System

In the following sections, you will find examples of how to configure some system aspects of your GNOME desktop. These include preferred applications, changing your user password, and session sharing preferences.

To learn more about configuring assistive technologies, see Chapter 4, Assistive Technologies.

3.4.1 Changing Your Password

For security reasons, it is a good idea to change your login password from time to time. To change your password:

  1. Click Applications › System Tools › Settings › Users.

  2. Click the button labeled with dots next to Password.

  3. In the first text box, type your current password.

  4. In the next text box, type a new password.

    You can also click the cog wheel icon at the end of the text box to generate a random password.

  5. Confirm your new password by typing it again in the last text box.

  6. Click Change.
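
Alternatively, you can change your password from a shell with the standard passwd command, which prompts for your current password and then for the new one:

    passwd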

3.4.2 Setting Preferred Applications

  1. To change the default application for various common tasks such as browsing the Internet, sending mails or playing multimedia files, click Applications › System Tools › Settings › Details.

    Preferred Applications
    Figure 3.6: Preferred Applications
  2. Click Default Applications.

  3. Select one of the available applications from the drop-down box. You can choose an application to handle Web, mail, calendar, music, videos or photographs.

3.4.3 Setting Session Sharing Preferences

To open a configuration dialog for sharing a GNOME desktop session between multiple users and set session-sharing preferences, click Applications › System Tools › Settings › Sharing.

Important
Important: Sharing Desktop Sessions Affects System Security

Sharing desktop sessions can be a security risk. Use the restriction options available.

Before you can share anything, you need to turn on the switch in the upper part of the dialog. The switch also lets you quickly disable all sharing options when needed.

  • To share your public directory over the network, click Personal File Sharing and turn on Share Public Folder On This Network. You can also set a password.

  • To share your desktop session with other users, click Screen Sharing and turn it on. To additionally allow other users to control your screen, activate Remote Control. You can also set a password.

  • To enable logging in via SSH, click Remote Login.

All the sharing screens contain an address which you can give to other users, so they can reach you. To copy a sharing address, click it and select Copy. You can then paste it into an e-mail or messaging software.

3.4.4 Configuring Administrative Settings with YaST

For your convenience, YaST is available from the GNOME Settings as well as from the Applications menu. For information about using YaST, refer to the Deployment Guide.

4 Assistive Technologies

Abstract

The GNOME desktop includes assistive technologies to support users with various impairments and special needs, and to interact with common assistive devices. This chapter describes several assistive technology applications designed to meet the needs of users with physical disabilities like low vision or impaired motor skills.

4.1 Enabling Assistive Technologies

To configure accessibility features, open the GNOME Settings dialog (for example using Applications › System Tools › Settings) and click Universal Access. Each assistive feature can be enabled separately using this dialog.

If you need more direct access to individual assistive features, turn on Always Show Universal Access Menu in the Universal Access dialog. A new menu will appear on the bottom panel.

4.2 Visual Impairments

In the Seeing section of the Universal Access dialog, you can enable features that help people with impaired vision.

  • Turning on High Contrast enables high contrast black and white icons in the GNOME desktop.

  • Turning on Large Text enlarges the font used in the user interface.

  • Turning on Zoom enables a screen magnifier. You can set the desired magnification and magnifier behavior, including color effects.

  • If the Screen Reader is turned on, any UI element or text that receives keyboard focus is read aloud.

  • If the Sound Keys are turned on, a sound is played whenever Num Lock or Caps Lock are turned on.

4.3 Hearing Impairments

In the Hearing section of the Universal Access dialog, you can enable features helping people with impaired hearing.

If the Visual Alerts are turned on, the window title bar or the entire screen flashes when an alert sound is played.

4.4 Mobility Impairments

In the Typing and Pointing and Clicking sections of the Universal Access dialog, you can enable features that help people with mobility impairments.

  • If the Screen Keyboard is turned on, a virtual keyboard appears whenever you need to enter text. You can use the screen keyboard by clicking the virtual keys.

  • Click Typing Assist (AccessX) to open a dialog where you can enable various features that make typing easier.

    • With Enable by Keyboard, you can turn accessibility features on or off by using the keyboard.

    • Sticky Keys allows you to type key combinations one key at a time rather than having to hold down all of the keys at once. For example, the Alt+Tab shortcut switches between windows.

      With sticky keys turned off, you need to hold down both keys at the same time. With sticky keys turned on, press Alt and then Tab to do the same.

    • Turn on Slow Keys if you want a delay between pressing a key and the letter being displayed on the screen. This means that you need to hold down each key you want to type for a little while before it appears. Use slow keys if you accidentally press several keys at a time when you type, or if you find it difficult to press the right key on the keyboard first time.

    • Turn on Bounce Keys to ignore key presses that are rapidly repeated. This can help, for example, if you have hand tremors which cause you to press a key multiple times when you only want to press it once.

  • Turn on Mouse Keys to control the mouse pointer using the numeric keypad on your keyboard.

  • Click Click Assist to open a dialog where you can enable various features that make clicking easier: simulated secondary click and hover click.

    • Turn on Simulated Secondary Click to activate the secondary click (usually the right mouse button) by holding down the primary button for a predefined Acceptance delay. This is useful if you find it difficult to move your fingers individually on one hand, or if your pointing device only has a single button.

    • Turn on Hover Click to trigger a click by hovering your mouse pointer over an object on the screen. This is useful if you find it difficult to move the mouse and click at the same time. If this feature is turned on, a small Hover Click window opens and stays above all of your other windows. You can use this to choose what sort of click should happen when you hover. When you hover your mouse pointer over a button and do not move it, the pointer gradually changes color. When it has fully changed color, the button will be clicked.

  • Use the slider to adjust the Double-Click Delay according to your needs.

4.5 For More Information

You can find further information in the GNOME help, which is also available online at https://help.gnome.org/users/gnome-help/3.20/a11y.html.en.

Part II Connectivity, Files and Resources

5 Accessing Network Resources

From your desktop, you can access files and directories or certain services on remote hosts or make your own files and directories available to other users in your network. SUSE® Linux Enterprise Desktop offers the following ways of accessing and creating network shared resources.

6 Managing Printers

SUSE® Linux Enterprise Desktop makes it easy to print your documents, whether your computer is connected directly to a printer or linked remotely on a network. This chapter describes how to set up printers in SUSE Linux Enterprise Desktop and manage print jobs.

7 Backing Up User Data

The Backup tool is a simple framework to let users back up and restore their own data such as home directories or selected files. It is possible to create scheduled backups or backups on request, and to restore a previous state of this data.

8 Passwords and Keys: Signing and Encrypting Data

The GNOME Passwords and Keys program is an important component of the encryption infrastructure on your system. With this program, you can create and manage PGP and SSH keys, import, export and share keys, back up your keys and keyring, and cache your passphrase.

9 gFTP: Transferring Data from the Internet

gFTP is a multithreaded file transfer client. It supports the FTP, FTPS (control connection only), HTTP, HTTPS, SSH, and FSP protocols. Furthermore, it allows the transfer of files between two remote FTP servers via FXP. To start gFTP, click Applications › Internet › gFTP.

5 Accessing Network Resources


From your desktop, you can access files and directories or certain services on remote hosts or make your own files and directories available to other users in your network. SUSE® Linux Enterprise Desktop offers the following ways of accessing and creating network shared resources.

Network Browsing

Your file manager, GNOME Files, lets you browse your network for shared resources and services. Learn more about this in Section 5.3, “Accessing Network Shares”.

Sharing Directories in Mixed Environments

Using GNOME Files, configure your files and directories to share with other members of your network. Make your data readable or writable for users from any Windows or Linux workstation. Learn more about this in Section 5.4, “Sharing Directories”.

Managing Windows Files

SUSE Linux Enterprise Desktop can be configured to integrate into an existing Windows network. Your Linux machine then behaves like a Windows client. It takes all account information from the Active Directory domain controller, just as the Windows clients do. Learn more about this in Section 5.5, “Managing Windows Files”.

Configuring and Accessing a Windows Network Printer

You can configure a Windows network printer through the GNOME control center. Learn how to do this in Section 5.6, “Configuring and Accessing a Windows Network Printer”.

5.1 Connecting to a Network

You can connect to a network with wired and wireless connections. To view your network connection, check the network icon in the right part of the main panel. If you click the icon, you can see more details in the menu. Click the connection name to see more details and access the settings.

To learn more about connecting to a network, see Chapter 30, Using NetworkManager.

5.2 General Notes on File Sharing and Network Browsing

Important
Important: Contact Your Administrator Before Setup

Whether and to what extent you can use file sharing and network browsing in your network highly depends on the network structure and on the configuration of your machine.

Before setting up either of them, contact your system administrator. Check whether your network structure supports a feature and whether your company's security policies permit it.

Network browsing, be it SMB browsing for Windows shares or SLP browsing for remote services, relies heavily on the machine's ability to send broadcast messages to all clients in the network. These messages and the clients' replies to them enable your machine to detect any available shares or services.

For broadcasts to work effectively, your machine must be part of the same subnet as all other machines it is querying. If network browsing does not work on your machine or the detected shares and services do not meet your expectations, contact your system administrator to ensure that you are connected to the appropriate subnet.

To allow network browsing, your machine needs to keep several network ports open to send and receive network messages that provide details on the network and the availability of shares and services. The standard SUSE Linux Enterprise Desktop is configured for tight security and has a firewall that protects your machine against the Internet.

To adjust the firewall configuration, you either need to ask your system administrator to put your interface into the internal zone or to tear down the firewall entirely (depending on your company's security policy). If you try to browse a network while a restrictive firewall is running on your machine, GNOME Files warns you that your security restrictions are not allowing it to query the network.

5.3 Accessing Network Shares

Networking workstations can be set up to share directories. Typically, files and directories are marked to allow users remote access. These are called network shares. If your system is configured to access network shares, you can use your file manager to access these shares and browse them as easily as if they were located on your local machine. Your level of access to shared directories (read-only or read-write) depends on the permissions granted to you by the owner of the shares.

To access network shares, open GNOME Files and click Other Locations in the sidebar. GNOME Files displays the servers and networks that you can access. Double-click a server or network to access its shares. You might be required to authenticate to the server by providing a user name and password. Common network shares are SFTP-accessible resources (SSH File Transfer Protocol) or Windows shares.

Network File Browser
Figure 5.1: Network File Browser
Procedure 5.1: Adding a Network Place
  1. Open GNOME Files and click Other Locations in the sidebar. It shows a Connect to Server text box.

  2. Enter the server address.

  3. Click Connect.
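
The server address must include the protocol. Typical forms are shown below; the host, share, and path names are placeholders:

    smb://server/share (a Windows or Samba share)
    sftp://user@server/path (a directory accessed via SSH)
    ftp://server/pub (an anonymous FTP server)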

5.4 Sharing Directories

Sharing and exchanging documents is a must-have in corporate environments. GNOME Files offers you file sharing, which makes your files and directories available to both Linux and Windows users.

5.4.1 Enabling Sharing on the Computer

Before you can share a directory, you must enable sharing on your computer. To enable sharing:

  1. Start YaST from the main menu.

  2. Enter the root password.

  3. In the category Network Services, click Windows Domain Membership.

  4. Click Allow Users to Share Their Directories, then click OK.

5.4.2 Enabling Sharing for a Directory

To configure file sharing for a directory:

  1. Open GNOME Files.

  2. Right-click a directory, select Properties and click Share.

  3. Select Share this folder.

  4. If you want other people to be able to write to the directory, select Allow others to create and delete files in this folder. To allow access for people without a user account, check Guest Access.

  5. Click Create Share.

  6. If the directory does not already have the permissions that are required for sharing, a dialog appears. Click Add the permissions automatically.

The directory icon changes to indicate that the directory is now shared.
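
Internally, these shares are created as Samba user shares. If your setup permits it, an equivalent share can be created from a shell with the net tool from the Samba package; a sketch, where the share name and path are examples:

    # share /home/tux/public read-only for everyone, allowing guest access
    net usershare add public /home/tux/public "" Everyone:R guest_ok=y

    # list the currently defined user shares
    net usershare info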

Important
Important: Samba Domain Browsing and Firewalls

Samba domain browsing only works if your system's firewall is configured accordingly. Either disable the firewall entirely or assign the browsing interface to the internal firewall zone. Ask your system administrator how to proceed.

5.5 Managing Windows Files

With your SUSE Linux Enterprise Desktop machine being an Active Directory client, you can browse, view and manipulate data located on Windows servers. The following examples are the most prominent ones:

Browsing Windows Files with GNOME Files

Use the network browsing features of GNOME Files to browse your Windows data.

Viewing Windows Data with GNOME Files

Use GNOME Files to display the contents of your Windows user directory as you would for displaying a Linux directory. Create new files and directories on the Windows server.

Manipulating Windows Data with GNOME Applications

Many GNOME applications allow you to open files on the Windows server, manipulate them and save them back to the Windows server.

Single Sign-On

GNOME applications, including GNOME Files, support Single Sign-On. This means that you do not need to re-authenticate when you access other Windows resources. These can be Web servers, proxy servers or groupware servers like Microsoft Exchange*. Authentication against all these is handled silently in the background using the user name and password you provided when you logged in.

To access your Windows data using GNOME Files, proceed as follows:

  1. Open GNOME Files and click Other Locations in the sidebar.

  2. Double-click Windows Network.

  3. Double-click the icon of the workgroup containing the computer you want to access.

  4. Click the computer’s icon (and authenticate if prompted to do so) and navigate to the shared directory on that computer.

To create directories in your Windows user directory using GNOME Files, proceed as you would when creating a Linux directory.

5.6 Configuring and Accessing a Windows Network Printer

Being part of a corporate network and authenticating against a Windows Active Directory server, you can access corporate resources such as printers. GNOME allows you to configure printing from your Linux client to a Windows network printer.

To configure a Windows network printer for use through your Linux workstation, proceed as follows:

  1. Start the GNOME control center from the main menu by clicking Applications › System Tools › Settings › Printers.

    Note
    Note: Starting the CUPS Service

    The CUPS service is not started by default after installation of SUSE Linux Enterprise Desktop. If the Printers dialog shows a message that the printing service is currently not available, you need to start the CUPS service manually.

    Start it by opening a shell and typing:

    sudo systemctl start cups
  2. Click Unlock and enter the root password.

  3. Click the plus icon.

  4. Select a Windows printer connected via Samba.

To print to the Windows network printer configured above, select it from the list of available printers.
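
The CUPS service started this way runs only until the next reboot. To have it start automatically at boot time, you can additionally enable the service with a standard systemd command:

    sudo systemctl enable cups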

6 Managing Printers


SUSE® Linux Enterprise Desktop makes it easy to print your documents, whether your computer is connected directly to a printer or linked remotely on a network. This chapter describes how to set up printers in SUSE Linux Enterprise Desktop and manage print jobs.

6.1 Installing a Printer

Before you can install a printer, you need to know the root password and have your printer information ready. Depending on how you connect the printer, you might also need the printer URI, TCP/IP address or host, and the driver for the printer. A number of common printer drivers ship with SUSE Linux Enterprise Desktop. If you cannot find a driver for the printer, check the printer manufacturer's Web site.

  1. Click Applications › System Tools › Settings › Printers.

  2. Click Unlock and enter the root password.

  3. Click the plus icon.

  4. If there are too many printers in the list, filter them by entering an IP address or a keyword into the search field in the lower part of the dialog.

  5. Select a printer from the list of available printers and click Add.

The installed printer appears in the Printers panel. You can now print to the printer from any application.
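
You can verify from a shell that the printer is known to CUPS; lpstat is part of the standard CUPS client tools:

    # list printers and show the default destination
    lpstat -p -d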

7 Backing Up User Data

Abstract

The Backup tool is a simple framework to let users back up and restore their own data such as home directories or selected files. It is possible to create scheduled backups or backups on request, and to restore a previous state of this data.

7.1 Creating Backups

First schedule which data you want to back up and when to do it.

  1. Click Applications › System Tools › Backup.

  2. If you are opening the application for the first time, a welcome screen appears. Click Just show my backup settings.

  3. On the Overview tab you can turn the Automatic backups on and off. You can also see the overview of the current settings.

  4. On the Storage tab, select a Backup Location and a Folder to which the backup should be written.

  5. On the Folders tab select the directories to back up and directories to ignore. For example, if you want to back up your home directory except for the Downloads directory, add your home directory to the category Folders to back up and your Downloads directory to the category Folders to ignore.

  6. On the Schedule tab select how often to perform the automatic backups (daily or weekly) and how long to keep the backups.

  7. (Optional) If you also want to perform a backup immediately, switch back to the Overview tab and click Back Up Now.

    1. Choose whether you want the backup to be password-protected.

      If so, type a password in the two text boxes next to Encryption Password and Confirm Password.

      If not, click Allow Restoring Without a Password.

    2. Click Continue to start the backup process. When the backup is finished, the window will close.

7.2 Restoring Data

To restore a previous state of your data, proceed as follows:

  1. Select Applications › System Tools › Backup.

  2. On the Overview tab, click Restore.

  3. Choose the location from which to restore. Click Forward. The tool searches for backups stored in that location.

  4. Choose a date. Click Forward.

  5. Choose whether to restore the files to the original location or to another directory. Click Forward to see a summary of your choices.

  6. Click Restore to start the restoration process.

8 Passwords and Keys: Signing and Encrypting Data


The GNOME Passwords and Keys program is an important component of the encryption infrastructure on your system. With this program, you can create and manage PGP and SSH keys, import, export and share keys, back up your keys and keyring, and cache your passphrase.

Start the program by choosing Applications › Utilities › Passwords and Keys.

Password and Keys Main Window
Figure 8.1: Password and Keys Main Window

8.1 Signing and Encryption

Signing.  Attaching an electronic signature to a piece of information, such as an e-mail message or a software package, proves its origin. To keep someone else from writing messages in your name, and to protect both you and your recipients, you should sign your mails. Signatures help you check the sender of the messages you receive and distinguish authentic messages from malicious ones.

Software developers sign their software so that you can check its integrity. Even if you get the software from an unofficial server, you can verify the package with the signature.

Encryption.  You might also have sensitive information you want to protect from other parties. Encryption transforms data so that it is unreadable for others. This is important for companies, so that they can protect internal information and their employees' privacy.

8.2 Generating a New Key Pair

To exchange encrypted messages with other users, you must first generate your own pair of keys. It consists of two parts:

  • Public Key.  This key is used for encryption. Distribute it to your communication partners, so they can use it to encrypt files or messages for you.

  • Private Key.  This key is used for decryption. Use it to make encrypted files or messages from others (or yourself) legible again.

Important
Important: Access to the Private Key

If others gain access to your private key, they can decrypt files and messages intended only for you. Never grant others access to your private key.

8.2.1 Creating OpenPGP Keys

OpenPGP is a non-proprietary protocol for encrypting e-mail with the use of public-key cryptography based on PGP. It defines standard formats for encrypted messages, signatures, private keys, and certificates for exchanging public keys.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Click File › New.

  3. Select PGP Key and click Continue.

  4. Specify your full name and e-mail address.

  5. Click Advanced key options to specify the following advanced options for the key.

    Comment

    An optional comment.

    Encryption Type

    Specifies the encryption algorithms used to generate your keys. DSA ElGamal is the recommended choice because it lets you encrypt, decrypt, sign, and verify as needed. Both DSA (sign only) and RSA (sign only) allow only signing.

    Key Strength

    Specifies the length of the key in bits. The longer the key, the more secure it is (provided a strong passphrase is used). Keep in mind that performing any operation with a longer key requires more time than it does with a shorter key. Acceptable values are between 1024 and 4096 bits. At least 2048 bits are recommended.

    Expiration Date

    Specifies the date at which the key will cease to be usable for performing encryption or signing operations. You will need to either change the expiration date or generate a new key or subkey after this amount of time passes. Sign your new key with your old one before it expires to preserve your trust status.

  6. Click Create to create the new key pair.

    The Passphrase for New PGP Key dialog opens.

  7. Specify the passphrase twice for your new key, then click OK.

    When you specify a passphrase, use the same practices you use when you create a strong password.
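
Passwords and Keys is a front-end to GnuPG, so an equivalent key pair can also be created in a shell. The interactive command below prompts for the same information (key type, strength, expiration, user ID, and passphrase):

    gpg2 --gen-key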

8.2.2 Creating Secure Shell Keys

Secure Shell (SSH) is a method of logging in to a remote computer to execute commands on that machine. SSH keys are used in key-based authentication systems as an alternative to the default password authentication system. With key-based authentication, there is no need to manually type a password to authenticate.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Click File › New.

  3. Select Secure Shell Key, then click Continue.

  4. Specify a description of what the key is to be used for.

    You can use your e-mail address or any other reminder.

  5. Optionally, click Advanced key options to specify the following advanced options for the key.

    Encryption Type.  Specifies the encryption algorithms used to generate your keys. Select RSA to use the Rivest-Shamir-Adleman (RSA) algorithm to create the SSH key. This is the preferred and more secure choice. Select DSA to use the Digital Signature Algorithm (DSA) to create the SSH key.

    Key Strength.  Specifies the length of the key in bits. The longer the key, the more secure it is (provided a strong passphrase is used). Keep in mind that performing any operation with a longer key requires more time than it does with a shorter key. Acceptable values are between 1024 and 4096 bits. At least 2048 bits is recommended.

  6. Click Just Create Key to create the new key, or click Create and Set Up to create the key and set up another computer to use for authentication.

  7. Specify the passphrase for your new key, click OK, then repeat.

    When you specify a passphrase, use the same practices you use when you create a strong password.
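
The same can be achieved in a shell with the standard OpenSSH tools. A minimal sketch; the comment and the remote host are examples:

    # generate a 4096-bit RSA key pair
    ssh-keygen -t rsa -b 4096 -C "tux@example.com"

    # copy the public key to a remote machine to enable key-based login
    ssh-copy-id tux@host.example.com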

8.3 Modifying Key Properties

You can modify properties of existing OpenPGP or SSH keys.

8.3.1 Editing OpenPGP Key Properties

The descriptions in this section apply to all OpenPGP keys.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Double-click the PGP key you want to view or edit.

  3. Use the options on the Owner tab to add a photo to the key or to change the passphrase associated with the key.

    Photo IDs allow a key owner to embed one or more pictures of themselves in a key. These identities can be signed like normal user IDs. A photo ID must be in JPEG format. The recommended size is 120×150 pixels.

    If the chosen image does not meet the required file type or size, Passwords and Keys can resize and convert it on the fly from any image format supported by the GDK library.

  4. Click the Names and Signatures tab to add a user ID to a key.

    See Section 8.3.1.1, “Adding a User ID” for more information.

  5. Click the Details tab, which contains the following properties:

    Key ID:  The Key ID is similar to the Fingerprint, but the Key ID contains only the last eight characters of the fingerprint. It is generally possible to identify a key with only the Key ID, but sometimes two keys might have the same Key ID.

    Type:  Specifies the encryption algorithm used to generate a key. DSA keys can only sign. ElGamal keys are used to encrypt.

    Strength:  Specifies the length, in bits, of the key. The longer the key, the more security it provides. However, a long key will not compensate for the use of a weak passphrase.

    Fingerprint:  A unique string of characters that exactly identifies a key.

    Created:  The date the key was created.

    Expires:  The date the key can no longer be used (a key can no longer be used to perform key operations after it has expired). Changing a key's expiration date to a point in the future re-enables it. A good general practice is to have a master key that never expires and multiple subkeys that do expire and are signed by the master key.

    Override Owner Trust:  Here you can set the level of trust in the owner of the key. Trust is an indication of how sure you are of a person's ability to correctly extend the Web of trust. When there is a key that you have not signed, the validity of the key is determined from its signatures and how much you trust the people who made those signatures.

    Export Secret Key:  Exports the key to a file.

    Subkeys:  See Section 8.3.1.2, “Editing OpenPGP Subkey Properties” for more information.

  6. Click Close.

8.3.1.1 Adding a User ID

User IDs allow multiple identities and e-mail addresses to be used with the same key. Adding a user ID is useful, for example, when you want to have an identity for your job and one for your friends. They take the following form:

Name (COMMENT) <E-MAIL>
  1. Click Applications › Utilities › Passwords and Keys.

  2. Double-click the PGP key you want to view or edit.

  3. Click the Names and Signatures tab, then click Add Name.

  4. Specify a name in the Full Name field.

    You must enter at least five characters in this field.

  5. Specify an e-mail address in the E-Mail Address field.

    Your e-mail address is how most people will locate your key on a key server or other key provider. Make sure it is correct before continuing.

  6. In the Key Comment field, specify additional information that will display in the name of your new ID.

    This information can be searched for on key servers.

  7. Confirm your changes and enter the passphrase when prompted for it.

8.3.1.2 Editing OpenPGP Subkey Properties

Each OpenPGP key has a single master key, which is used only for signing. Subkeys are used for encryption and for signing as well. In this way, if your subkey is compromised, you do not need to revoke your master key.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Double-click the PGP key you want to edit.

  3. Click the Details tab, then expand the Subkeys category.

  4. Use the buttons on the left of the dialog to add, delete, expire, or revoke subkeys.

    Each subkey has the following information:

    ID:  The identifier of the subkey.

    Type:  Specifies the encryption algorithm used to generate a subkey. DSA keys can only sign, ElGamal keys are used to encrypt, and RSA keys are used to sign or to encrypt.

    Usage:  Shows if the key can be used to sign, to certify, or also to encrypt.

    Created:  Specifies the date the key was created.

    Expires:  Specifies the date the key can no longer be used.

    Status:  Specifies the status of the key.

    Strength:  Specifies the length, in bits, of the key. The longer the key, the more security it provides. However, a long key will not compensate for the use of a weak passphrase.

  5. Click Close.

8.3.2 Editing Secure Shell Key Properties

The descriptions in this section apply to all SSH keys.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Double-click the Secure Shell key you want to view or edit.

  3. Use the options on the Key tab to change the name of the key or the passphrase associated with the key.

  4. Click the Details tab, which contains the following properties:

    Algorithm:  Specifies the encryption algorithm used to generate a key.

    Strength:  Indicates the length in bits of a key. The longer the key, the more security it provides. However, a long key does not make up for the use of a weak passphrase.

    Location:  The location where the private key has been stored.

    Fingerprint:  A unique string of characters that exactly identifies a key.

    Export Complete Key:  Exports the key to a file.

  5. Click Close.

8.4 Importing Keys

Keys can be exported to text files. These files contain human-readable text at the beginning and at the end of a key. This format is called an ASCII-armored key.

To import keys:

  1. Click Applications › Utilities › Passwords and Keys.

  2. Click File › Import.

  3. Select a file containing at least one ASCII-armored public key.

  4. Click Open to import the key.

You can also paste keys inside Passwords and Keys:

  1. Select an ASCII-armored public block of text, then copy it to the clipboard.

  2. Click Applications › Utilities › Passwords and Keys.

  3. Click Edit › Paste.
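
Because the keys end up in the GnuPG keyring, an ASCII-armored key file can also be imported in a shell; the file name is an example:

    gpg2 --import key.asc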

8.5 Exporting Keys

To export keys:

  1. Click Applications › Utilities › Passwords and Keys.

  2. Select the keys you want to export.

  3. Click File › Export.

  4. Specify a file name and location for the exported key.

  5. Click Save to export the key.

You can also export keys to the clipboard in an ASCII-armored block of text:

  1. Click Applications › Utilities › Passwords and Keys.

  2. Select the keys you want to export.

  3. Click Edit › Copy.
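
In a shell, a key can be exported in ASCII-armored form with GnuPG; the key ID and file name are examples:

    gpg2 --armor --export 0A1B2C3D > mykey.asc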

8.6 Signing a Key

Signing another person's key means that you are giving trust to that person. Before signing a key, carefully check the key's fingerprint to ensure that the key really belongs to that person.

Trust is an indication of how sure you are of a person's ability to correctly extend the Web of trust. When there is a key that you have not signed, the validity of the key is determined from its signatures and how much you trust the people who made those signatures.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Select the key you want to sign from the My Personal Keys or Other Keys tabs.

  3. Click File › Sign.

  4. Select how carefully the key has been checked, then indicate whether the signature should be local to your keyring and whether your signature can be revoked.

  5. Click Sign.

8.7 Password Keyrings

You can use password keyring preferences to create or remove keyrings, to set the default keyring for application passwords or to change the unlock password of a keyring. To create a new keyring, follow these steps:

  1. Click Applications › Utilities › Passwords and Keys.

  2. Click File › New › Password Keyring, then click Continue.

  3. Enter a name for the keyring and click Add.

  4. Set and confirm a new Password for the keyring and click Create.

To change the unlock password of an existing keyring, right-click the keyring in the Passwords tab and click Change Password. You need to provide the old password to be able to change it.

To change the default keyring for application passwords, right-click the keyring in the Passwords tab and click Set as Default.
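Keyring entries can also be read and written from a shell if the secret-tool utility from libsecret is installed, which may not be the case by default. The label and attributes below are made-up examples:

    # Store a secret in the default keyring (you are prompted for the secret itself)
    secret-tool store --label='Example entry' service example-app user tux

    # Look the secret up again by its attributes
    secret-tool lookup service example-app user tux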

8.8 Key Servers

You can keep your keys up-to-date by synchronizing keys periodically with remote keyservers. Synchronizing will ensure that you have the latest signatures made on all of your keys, so that the Web of trust will be effective.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Click Edit › Preferences, then click the Key Servers tab.

    Passwords and Keys provides support for HKP and LDAP keyservers.

    HKP Key Servers:  HKP key servers are ordinary Web-based key servers, such as the popular hkp://pgp.mit.edu:11371, also accessible at http://pgp.mit.edu.

    LDAP Key Servers:  LDAP key servers are less common, but use the standard LDAP protocol to serve keys. ldap://keyserver.pgp.com is a good LDAP server.

    Use the buttons on the left to Add or Remove key servers. When adding a new key server, set its type and host and, if necessary, its port.

  3. Set whether you want to automatically publish your public keys and which keyserver to use. Set whether you want to automatically retrieve keys from key servers and whether to synchronize modified keys with keyservers.

  4. Click Close.
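For reference, the corresponding GnuPG commands look as follows; the key server is the HKP server mentioned above, and the key ID is a placeholder:

    # Fetch a key from an HKP key server
    gpg --keyserver hkp://pgp.mit.edu --recv-keys 0xDEADBEEF

    # Refresh all keys in your keyring from the configured key server
    gpg --refresh-keys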

8.9 Key Sharing

Key Sharing is provided by DNS-SD, also known as Bonjour or Rendezvous. Enabling key sharing adds the local Passwords and Keys users' public key rings to the remote search dialog. Using these local key servers is generally faster than accessing remote servers.

  1. Click Applications › Utilities › Passwords and Keys.

  2. Click Edit › Preferences, then click the Key Servers tab.

  3. Select Automatically synchronize modified keys with key servers.

  4. Click Close.

9 gFTP: Transferring Data from the Internet

Abstract

gFTP is a multithreaded file transfer client. It supports the FTP, FTPS (control connection only), HTTP, HTTPS, SSH, and FSP protocols. Furthermore, it allows the transfer of files between two remote FTP servers via FXP. To start gFTP, click Applications › Internet › gFTP.

Figure 9.1: gFTP—Main Window

9.1 ASCII Compared to Binary Transfers

There are two common ways of transferring files via FTP: ASCII and binary. ASCII mode transfers files as text. ASCII files are .txt, .asp, .html, and .php files, for example. Binary mode transfers files as raw data. Binary files are .wav, .jpg, .gif, and .mp3 files, for example.

To change the transfer mode, click the FTP menu and select Binary or Ascii.

When transferring ASCII files from Linux/Unix to Windows or vice versa, open the Preferences dialog by clicking FTP › Preferences. Switch to the FTP tab and select Transfer Files in ASCII Mode to ensure that newline characters are correctly converted. This option will automatically be disabled in Binary mode.
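The same two modes exist in command-line FTP clients. The following is a minimal sketch using the classic ftp client; the host and file names are placeholders, and the commands after the first are typed at the ftp> prompt:

    ftp ftp.example.com
    # At the ftp> prompt: switch to ASCII mode for text files
    ascii
    get readme.txt
    # Switch to binary mode for raw data
    binary
    get image.jpg
    bye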

9.2 Connecting to a Remote Server

To connect to a remote server, do the following:

  1. Click Remote › Open Location.

  2. Specify a URL to connect to and click Connect.

  3. Specify your user name and click Connect. Then specify your password and click Connect. To connect anonymously, leave the user name blank.

  4. If the connection is successful, the right part of the gFTP window lists files from the remote computer. The file listing on the left side continues to show files from your local computer. You can now upload and download files via drag and drop or by using the arrow buttons.

To bookmark a site you access frequently, click Bookmarks › Add Bookmark. Specify a name for the bookmark, then click Add. The new bookmark is added to your list of bookmarks.
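gFTP can also be started with a URL from a shell; check man gftp for the exact syntax supported by your version. The user name and host below are placeholders:

    # Start gFTP and connect to the given site immediately
    gftp ftp://tux@ftp.example.com/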

9.3 Transferring Files

In the following figure, the file list on the right contains the remote server's directory of files. The file list on the left side contains your local computer's directory of files (on your hard disk or network).

Figure 9.2: gFTP File Transfer

To download files, select the files you want to download in the remote list of files, then click the arrow button pointing to the left. The progress of each download is listed in the field in the middle of the window. If the transfer is successful, the files appear in the directory listing on the left.

To upload files, select the files you want to upload in your local directory listing on the left, then click the arrow button pointing to the right. The progress of each upload is listed in the field in the middle of the window. If the transfer is successful, the files appear in the remote directory listing on the right.

To modify preferences for your downloads, select FTP › Preferences from the menu.

9.4 Setting Up an HTTP Proxy Server

To set up an HTTP proxy server, do the following:

  1. From the menu, select FTP › Preferences, then select the FTP tab.

  2. Enter the Proxy hostname and Proxy port. If applicable, also provide your login credentials for the proxy server. Choose a proxy type from the Proxy Server Type drop-down box.

  3. Click the HTTP tab and enter the same proxy server information as described above. Port numbers for the FTP and HTTP proxy may differ. If in doubt, ask your system administrator.

  4. Click OK.

9.5 For More Information

You can find more information about gFTP at http://www.gftp.org.

Part III LibreOffice

10 LibreOffice: The Office Suite

LibreOffice is an open source office suite that provides tools for all types of office tasks such as writing texts, working with spreadsheets, or creating graphics and presentations. With LibreOffice, you can use the same data across different computing platforms. You can also open and edit files in other formats, including Microsoft* Office* formats, then save them back to this format, if needed. This chapter contains information that applies to all LibreOffice modules.

11 LibreOffice Writer

LibreOffice Writer is a full-featured word processor with page and text formatting capabilities. Its interface is similar to interfaces of other major word processors, and it includes some features that are usually found only in desktop publishing applications.

This chapter highlights a few key features of Writer. For more information about these features and for complete instructions for using Writer, look at the LibreOffice help or at the sources listed in Section 10.11, “For More Information”.

Much of the information in this chapter can also be applied to other LibreOffice modules. For example, other modules use styles similarly to how they are used in Writer.

12 LibreOffice Calc

Calc is the LibreOffice spreadsheet module. Spreadsheets consist of several sheets, containing cells which can be filled with elements like text, numbers, or formulas. A formula can manipulate data from other cells to generate a value for the cell in which it is inserted. Calc also allows you to define ranges, filter and sort data, and create charts from data to present it graphically. Using pivot tables, you can combine, analyze or compare larger amounts of data.

13 LibreOffice Impress, Base, Draw, and Math

Besides LibreOffice Writer and LibreOffice Calc, LibreOffice also includes the modules Impress, Base, Draw, and Math. With these you can create presentations, design databases, draw up graphics and diagrams, and create mathematical formulas.

10 LibreOffice: The Office Suite

Abstract

LibreOffice is an open source office suite that provides tools for all types of office tasks such as writing texts, working with spreadsheets, or creating graphics and presentations. With LibreOffice, you can use the same data across different computing platforms. You can also open and edit files in other formats, including Microsoft* Office* formats, then save them back to this format, if needed. This chapter contains information that applies to all LibreOffice modules.

10.1 LibreOffice Modules

LibreOffice consists of several application modules (subprograms) which are designed to integrate with each other. While this chapter contains information that applies to all LibreOffice modules, the following chapters and sections contain information on individual modules. Table 10.1, “The LibreOffice Application Modules” gives a short description of each module and shows where it is described in detail.

A full description of each module is available in the application help, described in Section 10.11, “For More Information”.

Table 10.1: The LibreOffice Application Modules

  Writer:  Word processor module (described in Chapter 11)
  Calc:  Spreadsheet module (described in Chapter 12)
  Impress:  Presentation module (described in Section 13.1)
  Base:  Database module (described in Section 13.2)
  Draw:  Module for drawing vector graphics (described in Section 13.3)
  Math:  Module for generating mathematical formulas (described in Section 13.4)

10.2 Starting LibreOffice

To start LibreOffice, click Applications › Office › LibreOffice. In the LibreOffice start center, choose the type of document you want to create.

There are multiple methods to directly start one of the LibreOffice modules:

  • If any LibreOffice module is open, you can start any of the other modules by clicking File › New and then selecting the type of document you want to create.

  • You can also start individual LibreOffice modules from the menu Applications.

  • As an alternative, use the command libreoffice and one of the options --writer, --calc, --impress, --draw, or --base to start the respective module.

    LibreOffice has many command line options, especially for document conversions. To learn more about the command line options of LibreOffice, see libreoffice --help or the libreoffice man page (man 1 libreoffice). A few examples follow this list.
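The following sketch shows some of these options; the file name is a placeholder:

    # Start individual modules directly
    libreoffice --writer
    libreoffice --calc budget.ods

    # Show all available command line options
    libreoffice --help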

Before you start working with LibreOffice, you may be interested in changing some options from the preferences dialog. Click Tools › Options to open it. The most important ones are:

LibreOffice › User Data

Specify your user data such as company, first and last name, street, city, and other useful information. This data has many uses: it is used in the comment functions of Writer and Calc, for authorship information in PDF documents, and for form letters in Writer.

LibreOffice › Fonts

Map font names to installed fonts. This can be useful if you exchange documents with others and the document you received contains fonts that are not available on your system.

Load/Save › General

Contains loading and saving specific options. For example, you can choose whether to always create a backup copy and which file format LibreOffice should use by default.

To learn more about configuring LibreOffice, see Section 10.8, “Changing the Global Settings”.

10.3 The LibreOffice User Interface

The user interface is very similar across most LibreOffice modules:

Menu Bar

At the top of the application window, the menu bar gives access to almost all functionality of LibreOffice. The menu bar can be customized to include more or fewer functions, and you can add and remove menus.

Toolbars

By default, the toolbars are positioned directly below the menu bar. The toolbars contain the most used and most important items of the module.

To dock a toolbar to any other side of the window, drag it to the desired position. To make a toolbar float, drag it into the middle of the window. Toolbars can be customized to include more or fewer functions, and you can add and remove toolbars.

Side Bar

By default, the side bar is positioned at the right side of the LibreOffice window. On the first start of LibreOffice, it is only visible as several icons stacked vertically. Clicking one of the icons opens a panel with more elements. Click the icon again to close the panel. Similarly to the toolbars, the side bar comprises the most important functions.

To dock the side bar to the left or right side of the window, drag it to the desired position. To make the side bar float, drag it into the middle of the window. To hide the side bar, click the vertical arrowhead button on the document-facing side of the side bar.

You can hide or show side bar panels but cannot customize their functionality.

Statusbar

The statusbar is displayed at the bottom of the window. It mainly shows information about the document, such as the number of words (in Writer) or the sum of values of selected cells (in Calc). However, it can also be used to change the zoom or language settings. Many elements open additional menus or dialogs on left click, right click, or double click.

For more information on customizing LibreOffice, see Section 10.7, “Customizing LibreOffice”.

10.4 Compatibility with Other Office Applications

The native file format of LibreOffice is the OpenDocument format. OpenDocument is an ISO-standardized format for office documents that is based on XML. However, LibreOffice can also work with documents, spreadsheets, presentations, and databases in many other formats, including Microsoft Office formats. Files in Microsoft Office formats can be opened and saved back normally.

10.4.1 Opening Documents from Other Office Suites

If you use LibreOffice in an environment where you need to share documents with Microsoft Word users, you should have little or no trouble exchanging document files. However, very complex documents can require editing after opening. Complex documents are documents containing, for example, complicated tables, Microsoft Office macros, or unusual fonts, formatting, or graphical objects.

In case there should ever be issues with opening documents, try the following strategies:

  • Text Documents.  Consider opening text documents in the original application and saving them as RTF or plain text (TXT). However, saving as plain text means that all formatting will be lost.

  • Spreadsheets.  Consider opening spreadsheets in the original application and saving them as Excel files. If this does not work, try the CSV format. However, saving as CSV means that all formatting, cell type definitions, formulas, and macros will be lost.

10.4.2 Converting Documents to the OpenDocument Format

LibreOffice can read, edit, and save documents in several formats. You do not need to convert files from those formats to the OpenDocument format used by LibreOffice before working with them. However, if you want to convert the files, you can do so. To convert several documents at once, for example when first switching to LibreOffice, do the following:

  1. Select File › Wizards › Document Converter.

  2. Choose the file format from which to convert.

  3. Click Next.

  4. Specify where LibreOffice should look for templates and documents to convert and in which directory the converted files should be placed.

    Documents retrieved from a Windows partition are usually in a subdirectory of /windows.

  5. Make sure that all other settings are correct, then click Next.

  6. Review the summary of the actions to perform, then start the conversion by clicking Convert.

    The amount of time needed for the conversion depends on the number of files and their complexity. For most documents, conversion does not take long.

  7. When everything is done, close the Wizard.
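If you prefer a shell, for example to script the conversion of many files, the soffice command can do the same. A minimal sketch, assuming the Word files are in the current directory and the output directory already exists; note that a headless conversion may fail while another LibreOffice instance is running:

    # Convert all Word documents in the current directory to OpenDocument text
    soffice --headless --convert-to odt --outdir converted/ *.doc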

10.4.3 Sharing Files with Users of Other Office Suites

LibreOffice is available for several operating systems. This makes it an excellent tool when a group of users frequently need to share files but do not run the same operating system on their computers.

When sharing documents with others, you have several options:

If the recipient needs to be able to edit the file

Save the document in the format the other user needs. For example, to save as a Microsoft Word file, click File › Save As, then select the Microsoft Word file type for the version of Word the other user needs.

If the recipient only needs to read the document

Export the document to a PDF file with File › Export as PDF. PDF files can be read on any platform using a PDF viewer.

Sharing a document for editing

Agree on a common exchange format that works for everyone. TXT and RTF formats, although limited in formatting, can be a good option for text documents.

E-mailing a document as a PDF

Click File › Send › E-mail as PDF. Your default e-mail program opens with the file attached.

E-mailing a document to a Microsoft Word user

Click File › Send › E-mail as Microsoft Word. Your default e-mail program opens with the file attached.
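PDF export is also available from a shell, which is convenient for batch jobs; letter.odt is a placeholder:

    # Export a document to PDF without opening the GUI
    soffice --headless --convert-to pdf letter.odt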

10.5 Saving Files with a Password

You can save files in any LibreOffice format with a password. Unlike older versions of LibreOffice, the encryption applied to the document by recent versions of LibreOffice is very strong. However, this encryption does not protect the file names and file sizes of encrypted files. If that is important to you, see the alternate encryption methods described in Chapter 11, Encrypting Partitions and Files.

Procedure 10.1:
  1. To save a file with a password, select File › Save or File › Save As.

  2. In the dialog that opens, activate the check box Save with password at the bottom and click Save.

  3. Type and confirm your password, then click OK.

The next time you open the file, you will be prompted for the password.

To change or remove the password, do one of the following:

  • To remove the password, overwrite the same file by selecting File › Save As and make sure Save with password is deactivated.

  • To change the password, select File › Properties and click Change Password to access the password dialog.

10.6 Signing Documents

You can digitally sign documents to protect them. For this, you need a personal certificate, similar to an HTTPS certificate. You can either create a self-signed certificate or choose to obtain one from a Certificate Authority.

When applying a digital signature to a document, a kind of checksum is created from the document's content and your personal key. The checksum is stored together with the document.

When another person opens the document, the checksum will be generated again. The new checksum is then compared to the original checksum. If both are equal, the application will signal that the document has not been changed in the meantime.
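This is the same principle as a detached GnuPG signature, which you can try on any file to see the mechanism; contract.odt is a placeholder, and note that LibreOffice stores its signatures inside the document rather than in a separate file:

    # Create a detached, ASCII-armored signature with your private key
    gpg --armor --detach-sign contract.odt

    # Verify: recompute the checksum and compare it against the signature
    gpg --verify contract.odt.asc contract.odt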

To add a certificate to LibreOffice, you need to use Firefox:

Procedure 10.2:
  1. Start Firefox by selecting Applications › Internet › Firefox.

  2. Go to the certificate preferences by opening the menu (the three-line button), then select Preferences › Advanced › Certificates › View Certificates.

  3. Add your certificate: select Your Certificates, click Import, then locate your certificate file.

To sign a document, first open it in LibreOffice. Then select File › Digital Signatures › Sign Document. Select the certificate you want to use for signing, then click OK.

SUSE Linux Enterprise Desktop allows you to access certificates from the certificate store. For more information, refer to Chapter 12, Certificate Store.

10.7 Customizing LibreOffice

You can customize LibreOffice to best suit your needs and working style. Toolbars, menus, and key combinations can all be reconfigured to help you more quickly access the features you use the most.

You can also assign macros to application events if you want specific actions to occur when those events take place. For example, if you always work with a specific spreadsheet, you can create a macro that opens the spreadsheet and assign the macro to the Start Application event.

This section contains simple, generic instructions for customizing your environment. The changes you make are effective immediately. This means you can see if the changes are what you wanted and go back and modify them if they are not. See the LibreOffice help files for detailed instructions.

To access the customization dialog in any open LibreOffice module, select Tools › Customize.

Figure 10.1: Customization Dialog in Writer
Note
Note: Further Information

Click Help for more information about the options in the Customize dialog.

Procedure 10.3: Customizing Toolbars
  1. In the customization dialog, click the tab Toolbar.

  2. From the drop-down box Toolbar, select the toolbar you want to customize.

  3. Activate the check boxes next to the commands you want to appear on the toolbar, and deactivate the check boxes next to the commands you do not want to appear. A short description for each command is shown at the bottom of the dialog.

  4. With Save In, select whether to save your customized toolbar in the current LibreOffice module or in the current document. If you decide to save it in the LibreOffice module, the customized toolbar is used whenever you open that module. If you decide to save it together with the current document, the customized toolbar is used whenever you open that document.

  5. Repeat to customize additional toolbars.

  6. Click OK.

To switch back to the original settings again, open the customization dialog, click the Toolbar drop-down box and select Restore Default Settings. Click Yes and Reset to proceed.

Procedure 10.4: Showing or Hiding Buttons in the Toolbar
  1. Click the arrow icon at the right edge of the toolbar you want to change.

  2. Click Visible Buttons to display a list of buttons.

  3. Select the buttons in the list to enable (check) or disable (uncheck) them.

Procedure 10.5: Customizing Menus

You can add or delete items from current menus, reorganize menus, and even create new menus.

  1. Click Tools › Customize › Menus.

  2. Select the menu you want to change, or click New to create a new menu.

  3. Modify, add, or delete menu items as desired.

  4. Click OK.

Procedure 10.6: Customizing Key Combinations

You can reassign currently assigned key combinations and assign new ones to frequently used functions.

  1. Click Tools › Customize › Keyboard.

  2. Select the keys you want to assign to a combination.

  3. Select a Category and an appropriate function.

  4. Click Modify to assign the function to the key or Delete to remove an existing assignment.

  5. Click OK.

Procedure 10.7: Customizing Events

LibreOffice also provides ways to assign macros to events such as application start-up or the saving of a document. The assigned macro runs automatically whenever the selected event occurs.

  1. Click Tools › Customize › Events.

  2. Select the event you want to change.

  3. Assign or remove macros for the selected event.

  4. Click OK.

10.8 Changing the Global Settings

Global settings can be changed in any LibreOffice module by clicking Tools › Options on the menu bar. This opens the window shown in the figure below. A tree structure is used to display categories of settings.

Figure 10.2: The Options Window

The settings categories that appear depend on the module you are working in. For example, if you are in Writer, the LibreOffice Writer category appears in the list, but the LibreOffice Calc category does not. The LibreOffice Base category appears in both Calc and Writer. The Module column in the table shows where each setting category is available.

The following table lists the settings categories along with a brief description of each category:

Table 10.2: Global Setting Categories

  LibreOffice (module: All)
    Basic settings, including your user data (such as your address and e-mail), important paths, and settings for printers and external programs.

  Load/Save (module: All)
    Settings related to the opening and saving of several file types. There is a dialog for general settings and several special dialogs to define how external formats should be handled.

  Language Settings (module: All)
    Settings related to languages and writing aids, such as your locale and spell checker settings. This is also the place to enable support for Asian languages.

  LibreOffice Writer (module: Writer)
    Settings related to word processing, such as the basic units, fonts and layout that Writer should use.

  LibreOffice Writer/Web (module: Writer)
    Settings related to the HTML authoring features of LibreOffice.

  LibreOffice Calc (module: Calc)
    Settings related to spreadsheets, such as spreadsheet appearance, Microsoft Excel compatibility options, and calculation options.

  LibreOffice Impress (module: Impress)
    Settings related to presentations, such as enabling the smartphone remote control and the grid of the page to use.

  LibreOffice Draw (module: Draw)
    Settings related to drawings, such as the grid of the page to use.

  LibreOffice Base (module: Base)
    Allows setting and editing database connections and registered databases.

  Charts (module: All)
    Allows defining the default colors used for newly created charts.

  Internet (module: All)
    Allows configuring a proxy and the e-mail software to use.

Important
Important: Settings Apply Globally

All settings listed in the table apply globally for the specified modules. That means they are used as defaults for every new document you create.

10.9 Using Templates

A template is a document containing only the styles and content that you want to appear in every document of that type. When a document is created or opened with the template, the styles are automatically applied to that document. Templates greatly enhance the use of LibreOffice by simplifying formatting tasks for a variety of different types of documents.

For example, in a word processor, you can write letters, memos, and reports, all of which look different and require different styles. Likewise, for spreadsheets, you could use different cell styles or headings for certain types of spreadsheets. If you use templates for each of your document types, the styles you need for each document are always readily available.

LibreOffice comes with a set of predefined templates. You can also find additional templates on the Internet, for example at http://templates.libreoffice.org. For details, see Section 10.11, “For More Information”.

Creating your own templates requires some planning. You need to determine how you want the document to look, so you can create the styles you need in that template.

A detailed explanation of templates is beyond the scope of this section. Procedure 10.8, “Creating LibreOffice Templates” only shows how to generate a template from an existing document.

Procedure 10.8: Creating LibreOffice Templates

For text documents, spreadsheets, presentations, and drawings, you can create a template from an existing document as follows:

  1. Start LibreOffice and open or create a document that contains the styles and content that you want to re-use for other documents of that type.

  2. Click File › Templates › Save as Template.

  3. Choose a directory to save the template in by double-clicking one of the directory icons.

    If you are in a subdirectory and want to go up again, use the path bar displayed above the directories.

  4. From the toolbar, choose Save.

  5. Specify a name for the template.

  6. Click OK.

Note
Note: Converting Microsoft Word Templates

You can convert Microsoft Word templates like you would convert any other Word document. For more information, see Section 10.4.2, “Converting Documents to the OpenDocument Format”.

10.10 Setting Metadata and Properties

When exchanging documents with other people, it is sometimes useful to store metadata like the owner of the file, who it was received from, and a URL. LibreOffice lets you attach such metadata to the file. This helps you track metadata which you do not want to or cannot save in the content of the file. This feature is also the basis for later sorting, searching and retrieving your documents based on metadata.

As an example, we assume you want to set these properties to your file:

  • A title, subject, and some keywords

  • The owner of the file

  • Who sent you the file

To attach such metadata to your document, proceed as follows:

Procedure 10.9: Setting Properties
  1. Click File › Properties. A dialog with several tabs opens.

  2. Change to the Description tab and insert title, subject, and your keywords.

  3. Switch to the Custom Properties tab.

  4. To add a row for a property, click Add.

  5. In the Name column, click the drop-down box for the entry. A list of properties appears; from it, choose Owner.

  6. Insert the name of the owner in the Value column.

  7. Repeat from Step 4, but this time choose Received from as the name of the property.

    Optionally, repeat from Step 4 for more properties.

    To remove a property, click the red icon at the end of the corresponding row.

  8. Leave the dialog with OK.

  9. Save the file.

10.11 For More Information

LibreOffice contains extensive online help. In addition, a large community of users and developers support it. The following list shows some places where you can go for additional information.

LibreOffice Application Help (Help › LibreOffice Help)

Extensive help on performing any task in LibreOffice.

https://www.libreoffice.org

Home page of LibreOffice

https://ask.libreoffice.org

Official question and answer page for LibreOffice.

http://www.taming-libreoffice.com/

Taming LibreOffice: books, news, tips and tricks.

http://www.pitonyak.org/oo.php

Extensive information about creating and using macros.

http://extensions.libreoffice.org/

Extension and template directory for LibreOffice.

https://www.worldlabel.com/Pages/openoffice-template.htm

Templates for creating labels with LibreOffice.

11 LibreOffice Writer

Abstract

LibreOffice Writer is a full-featured word processor with page and text formatting capabilities. Its interface is similar to interfaces of other major word processors, and it includes some features that are usually found only in desktop publishing applications.

This chapter highlights a few key features of Writer. For more information about these features and for complete instructions for using Writer, look at the LibreOffice help or at the sources listed in Section 10.11, “For More Information”.

Much of the information in this chapter can also be applied to other LibreOffice modules. For example, other modules use styles similarly to how they are used in Writer.

11.1 Creating a New Document

There are multiple ways to create a new Writer document:

  • From Scratch.  To create a new empty document, click File › New › Text Document.

  • Using a Wizard.  To use a standard format and predefined elements for your own documents, use a wizard. Click File › Wizards › Letter and follow the steps.

  • From a Template.  To use a template, click File › New › Templates and open, for example, Business Correspondence. From the list of text document templates, select the one that fits your needs.

For example, to create a business letter, click File › Wizards › Letter. Using the wizard, you can easily create a basic document using a standard format. A sample wizard dialog is shown in Figure 11.1.

Figure 11.1: A LibreOffice Wizard

Enter text in the document window as desired. Use the tools for applying and changing styles or the tools for direct formatting to adjust the appearance of the document. Use the File menu or the relevant buttons in the toolbar to print and save your document. With the options under Insert, add extra items to your document, such as a table, picture, or chart.

11.2 Formatting with Styles

The traditional way of formatting office documents is direct formatting. That means you use a button, such as Bold, which sets a certain property (in this case, a bold typeface). With styles, you can bundle a set of properties (for example, font size and font weight) and give them a descriptive name, such as Headline, first level. Using styles rather than direct formatting has the following advantages:

  • Gives your pages, paragraphs, texts, and lists a consistent look.

  • Makes it easy to consistently change formatting later.

  • Allows reuse and import of styles from another document.

  • Propagates changes to a style to all styles that inherit from it.

Example 11.1: Use of Styles

Imagine that you emphasize text by selecting it and clicking the button Bold. Later, you decide you want the emphasized text to be italicized. Now, without styles, you need to find all bold text and manually change it to italics.

If you had used a character style from the beginning, however, you would only need to change the style from bold to italics once. All text formatted with a style changes its appearance as the style is changed.

LibreOffice can use styles for applying consistent formatting to various elements in a document. The following types of styles are available in Writer:

Table 11.1: Types of Styles

  Paragraph
    Applies standardized formatting to the various types of paragraphs in your document. For example, apply a paragraph style to a first-level heading to set the font and font size, spacing above and below the heading, location of the heading, and other formatting specifications.

  Character
    Applies standardized formatting for types of text. For example, if you want emphasized text to appear in italics, you can create an emphasis style that italicizes selected text when you apply the style to it.

  Frame
    Applies standardized formatting to frames. For example, if your document uses marginal notes, you can create frames with specified borders, location, and other formatting, so that all of your marginal notes have a consistent appearance.

    Frames are also used for captioning images: a frame can keep the caption and the image together. Here, you can use a frame style to make sure that all your images have the same size and background color, for example.

  Page
    Applies standardized formatting to a specified type of page. For example, if every page of your document contains a header and footer except for the first page, you can use a first page style that disables headers and footers. You can also use different page styles for left and right pages so that you have bigger margins on the insides of pages and your page numbers appear on an outside corner.

  List
    Applies standardized formatting to specified list types. For example, you can define a checklist with square check boxes and a bullet list with round bullets, then easily apply the correct style when creating your lists.

Direct formatting overrides any styles you have applied. For example, if you format a piece of text with a character style and then click Bold, the text will be bold, no matter what is set in the style.

To remove all direct formatting, first select the appropriate text, then right-click it and choose Clear Direct Formatting.

Likewise, if you manually format paragraphs using Format › Paragraph, you can end up with inconsistent paragraph formatting. This is especially true if you copy and paste paragraphs from other documents with different formatting. However, if you apply paragraph styles, formatting remains consistent. If you change a style, the change is automatically applied to all paragraphs formatted with that style.

11.2.1 The Side Bar Panel Styles and Formatting

The side bar panel Styles and Formatting is a versatile formatting tool for applying styles to text, paragraphs, pages, frames, and lists. To open this panel, click Styles › Styles and Formatting, click the button Styles and Formatting (a T) in the side bar or press F11.

Figure 11.2: Styles and Formatting Panel

LibreOffice comes with several predefined styles. You can use these styles as they are, modify them, or create new styles. Use the icons at the top of the panel to display formatting styles for the most common elements such as paragraphs, frames, pages or lists. To learn more about styles, continue with the instructions below.

11.2.2 Applying a Style

To apply a style, select the element you want to apply the style to, and double-click the style in the panel Styles and Formatting. For example, to apply a style to a paragraph, place the cursor anywhere in that paragraph and double-click the desired paragraph style.

Alternatively, use the paragraph style selector in the toolbar Formatting.

11.2.3 Changing a Style

By changing styles, you can change formatting throughout a document, rather than applying the change separately everywhere you want to apply the new formatting.

To change an existing style, proceed as follows:

  1. In the panel Styles and Formatting, right-click the style you want to change.

  2. Click Modify.

  3. Change the settings for the selected style.

    For information about the available settings, refer to the LibreOffice online help.

  4. Click OK or Apply.

11.2.4 Creating a Style

LibreOffice comes with a collection of styles to suit most needs. However, if you need a style that does not yet exist, you can create your own style by following the procedure below:

Procedure 11.1: Creating a New Style
  1. Open the panel Styles and Formatting with Styles › Styles and Formatting, or by pressing F11.

  2. Make sure you are in the list of styles for the type of style you want to create.

    For example, if you are creating a character style, make sure you are in the character style list by clicking the corresponding icon in the panel Styles and Formatting.

  3. Right-click anywhere in the list of styles in the panel Styles and Formatting.

  4. To open the style dialog, click New. The Organizer tab is preselected.

  5. Configure three basic properties of the new style:

    Name

    The name of your style. Choose any name you like.

    Next Style

    The style that follows your style. The style selected here is used when starting a new paragraph by pressing Enter. This is useful, for example, for headlines, after which you usually want to start a normal paragraph of text.

    Inherit From

    A style that your style depends on. If the selected style is changed, your style changes as well. For example, to make headers consistent, create a parent header style and have subsequent headers depend on it. This is useful when you only want to change the properties that need to be different.

    For details about the style options available in any tab, click the Help button of the dialog.

  6. Confirm with OK. This closes the window.

11.2.4.1 Example: Defining a Note Style

Let us assume you need a note with a different background and borders. To create such a style, proceed as follows:

Procedure 11.2: Creating a Note Style
  1. Press F11. The panel Styles and Formatting opens.

  2. Make sure you are in the Paragraph Style list by checking that the pilcrow icon (¶) is selected.

  3. Right-click anywhere in the list of styles in the panel Styles and Formatting and select New.

  4. Specify the following parameters in the tab Organizer:

    Name

    Note

    Next Style

    Note

    Inherit from

    - None -

    Category

    Custom Styles

  5. Change the indentation in the tab Indents & Spacing, using the text field Before Text. If you also want more space above and below individual paragraphs, change the values in the fields Above paragraph and Below paragraph accordingly.

  6. Switch to the tab Background and choose a color for the background.

  7. Switch to the tab Borders and determine your line arrangements, line style, color and other parameters.

  8. Confirm with OK. This closes the window.

  9. Select your text in your document and double-click the style Note. Your style parameters are applied to the text.

11.2.4.2 Example: Defining an Even-Odd Page Style

If you want to create double-sided printouts of your documents, especially if they are supposed to be bound, use page styles for even and odd pages. To create these page styles, proceed as follows:

Procedure 11.3: Create an Even (Left) Page Style
  1. Press F11. The panel Styles and Formatting opens.

  2. Make sure you are in the list Page Style by checking that the paper sheet icon is selected.

  3. Right-click anywhere in the list of styles in the panel Styles and Formatting and select New.

  4. Enter the following parameters in the tab Organizer:

    Name

    Left Content Page

    Next Style

    Leave empty, will be changed later

    Inherit from

    not applicable

    Category

    not applicable

  5. Change additional parameters as you like in the other tabs. You can also adapt the page format and margins (tab Page) or any headers and footers.

  6. Confirm with OK. This closes the window.

Procedure 11.4: Create an Odd (Right) Page Style
  1. Follow the instructions in Procedure 11.3, “Create an Even (Left) Page Style”, but use the string Right Content Page in the Organizer tab.

  2. Select the entry Left Content Page from the drop-down box Next Style.

  3. Choose the same parameters as you did for the left page style. If you used different sizes for the left and right margin of your even page, mirror these values in your odd pages.

  4. Confirm with OK. This closes the window.

Then connect the left page style with the right page style:

Procedure 11.5: Connect the Right Page Style with the Left Page Style
  1. Right-click the entry Left Content Page and choose Modify.

  2. Choose Right Content Page from the drop-down box Next Style.

  3. Confirm with OK. This closes the window.

To apply your style, make sure your page is a left (even) page and double-click Left Content Page. Whenever your text exceeds the length of a page, the following page automatically receives the other page style.

11.3 Working with Large Documents

You can use Writer to work on large documents. Large documents can be either a single file or a collection of files assembled into a single document.

11.3.1 Navigating in Large Documents

The Navigator tool displays information about the contents of a document. It also lets you quickly jump to different elements. For example, use the Navigator to get a quick overview of all images included in a document.

To open the Navigator, click View › Navigator or press F5. The elements listed in the Navigator vary with the document loaded in Writer.

The Navigator is also an element of the side bar: There, it can be opened using the button Navigator (a compass).

Double-click an item in the Navigator to jump to that item in the document.

Figure 11.3: Navigator Tool in Writer

11.3.2 Using Master Documents

If you are working with a very large document, such as a book, it can be easier to manage the book with a master document, rather than keeping the book in a single file. A master document enables you to quickly apply formatting changes to a large document or to jump to each subdocument for editing.

A master document is a Writer document that serves as a container for multiple Writer files. You can maintain chapters or other subdocuments as individual files collected in the master document. Master documents are also useful if multiple users are working on a single document. You can separate each user’s section of the document into subdocuments collected in a master document, allowing multiple writers to work on their subdocuments at the same time without fear of overwriting the work of others.

Procedure 11.6: Creating a Master Document
  1. Click File › New › Master Document.

    or

    Open an existing document and click File › Send › Create Master Document.

  2. The Navigator window opens. In it, click the Insert icon, then choose File.

  3. Select the existing file you want to add to the master document.

Procedure 11.7: Adding a New Document to a Master Document
  1. In the Navigator window or panel, click the Insert icon, then select New Document.

  2. A file chooser opens so you can save the new document. Specify a name, then click Save.

  3. When you are done editing the new document, save it. Then switch back to the master document.

  4. Update the master document with the contents of the new document. To do so, right-click the entry of your new document in the Navigator, then click Update › Selection.

To enter some text directly into the master document, select Insert › Text.

The LibreOffice help files contain more complete information about working with master documents. Look for the topic named Using Master Documents and Subdocuments.

Tip
Tip: Styles and Templates in Master Documents

The styles from all of your subdocuments are imported into the master document. To ensure that formatting is consistent throughout your master document, use the same template for each subdocument. Doing so is not mandatory.

However, if subdocuments are formatted differently, you might need to do some reformatting to successfully bring subdocuments into the master document without creating inconsistencies. For example, if two documents within a master document include styles with the same name, the master document will use the formatting specified for the style in the document imported first.

11.4 Using Writer as an HTML Editor

In addition to being a full-featured word processor, Writer also functions as an HTML editor. You can style HTML pages like any other document, but there are specific HTML Styles that help with creating good HTML. You can view the document as it will appear online, or you can directly edit the HTML code.

Procedure 11.8: Creating an HTML Page
  1. Click File › New › HTML Document.

  2. Press F11 to open the panel Styles and Formatting.

  3. At the bottom of the panel Styles and Formatting, click the drop-down box to open it.

  4. Select HTML Styles.

  5. Create your HTML page, using the styles to tag your text.

  6. Click File › Save As.

  7. Select the location where you want to save your file and name the file. Make sure that in the bottom drop-down box, HTML Document is selected.

  8. Click OK.

To edit HTML code directly or to see the HTML code created when you edit the HTML file as a Writer document, click View › HTML Source. In HTML Source mode, the Styles and Formatting list is not available.

The first time you switch to HTML Source mode, you are prompted to save the file as HTML, if you have not already done so.

To switch back from HTML Source mode to Web Layout, click View › HTML Source again.
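Writer documents can also be converted to HTML from a shell, for example for batch publishing; page.odt is a placeholder:

    # Convert a Writer document to HTML without opening the GUI
    soffice --headless --convert-to html page.odt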

12 LibreOffice Calc


Calc is the LibreOffice spreadsheet module. Spreadsheets consist of several sheets, containing cells which can be filled with elements like text, numbers, or formulas. A formula can manipulate data from other cells to generate a value for the cell in which it is inserted. Calc also allows you to define ranges, filter and sort data, and create charts from data to present it graphically. Using pivot tables, you can combine, analyze or compare larger amounts of data.

This chapter can only introduce some very basic Calc functionality. For more information and for complete instructions, see the LibreOffice application help and the sources listed in Section 10.11, “For More Information”.

Note
Note: VBA Macros

Calc can process many VBA macros in Excel documents. However, support for VBA macros is not complete. When opening an Excel spreadsheet that makes heavy use of macros, you might discover that some do not work.

12.1 Creating a New Document

There are two ways to create a new Calc document:

  • From Scratch.  To create a new empty document, click File › New › Spreadsheet.

  • From a Template.  To use a template, click File › New › Templates and open, for example, Finances. From the list of spreadsheet templates, select the one that fits your needs.

Access the individual sheets by clicking their respective tabs at the bottom of the window.

Enter data in the cells as desired. To adjust the appearance, either use the Formatting toolbar or side bar panel, or use the Format menu—or define styles as described in Section 12.2, “Using Formatting and Styles in Calc”. Use the File menu or the relevant buttons in the toolbar to print and save your document.

12.2 Using Formatting and Styles in Calc

Calc comes with a few built-in cell and page styles to improve the appearance of your spreadsheets and reports. Although these built-in styles are adequate for many uses, you will probably find it useful to create styles for your own frequently used formatting preferences.

Procedure 12.1: Creating a Style
  1. Click Format › Styles › Styles and Formatting or press F11.

  2. At the top of the panel Styles and Formatting, click either Cell Styles (a green cell) or Page Styles (a document).

  3. Right-click anywhere in the list of styles in the panel Styles and Formatting. Then click New.

  4. Specify a name for the style and use the various tabs to set the desired formatting options.

  5. When you are done configuring the style, click OK.

Procedure 12.2: Modifying a Style
  1. Click Format › Styles › Styles and Formatting.

  2. At the top of the panel Styles and Formatting, click either Cell Styles (a green cell) or Page Styles (a document).

  3. Right-click the name of the style you want to change, then click Modify.

  4. Change the desired formatting options.

  5. When you are done configuring the style, click OK.

To apply a style to specific cells, select the cells you want to format. Then double-click the style you want to apply in the Styles and Formatting window.

12.3 Working With Sheets

Sheets are a good method to organize your calculations. For example, if you have a business, accounting might be much clearer if you create a sheet for each month.

To insert a new sheet after the last sheet, click the button + next to the sheet tabs.

To insert one or more new sheets at a specific position, or to import sheets from another file, do the following:

Procedure 12.3: Inserting New Sheets
  1. Right-click a sheet tab and select Insert Sheet. A dialog opens.

  2. Decide whether the new sheet should be positioned before or after the selected sheet.

  3. To create a new sheet, make sure the New Sheet radio button is activated. Enter the number of sheets and the sheet name. Skip the rest of this step.

    Alternatively, to import a sheet from another file, do the following:

    1. Select From file and click Browse.

    2. Select the file name and confirm with OK. All the sheet names are now displayed in the list.

    3. Select the sheet names you want to import by holding the Shift key and clicking them.

  4. To add the sheet or sheets, confirm with OK.

To rename a sheet, right-click the tab of the sheet and select Rename Sheet. Alternatively, you can also double-click the sheet tab.

To delete one or multiple sheets, do the following: Select the sheet you want to delete. To select more than one sheet, hold down Shift while making the selection. Then right-click the tab of the sheet, choose Delete Sheet and confirm with Yes.

12.4 Conditional Formatting

Conditional formatting is a useful feature to highlight certain values in your spreadsheet. You define a condition, and if the condition is true, a style is applied to each cell that fulfills it.

Note
Note: Enable AutoCalculate

Before you apply conditional formatting, choose Tools › Cell Contents › AutoCalculate. You should see a check mark in front of AutoCalculate.

Procedure 12.4: Using Conditional Formatting
  1. Define a style first. This style is applied to each cell when your condition is true. Use Format › Styles and Formatting or press F11. For more information, see Procedure 12.1, “Creating a Style”. Confirm with OK.

  2. Select the cell range where you want to apply your condition.

  3. Select Format › Conditional Formatting › Condition from the menu. A dialog opens.

  4. You now see a template for a new condition. Conditions can operate in multiple modes:

    Cell value is

    The condition tests if a cell matches a certain value. Next to the first drop-down box, select an operator such as equal to, less than, or greater than.

    Formula is

    The condition tests if a certain formula returns true, for example ISODD(ROW()) to match every other row.

    Date is

    The condition tests if a certain date value is reached.

    All Cells

    This mode allows creating data visualizations that depend on the value of a cell, similarly to Cell value is. However, with All Cells, you can use one condition to apply an entire range of styles.

    The types of styles that can be used are color scales (cell background color), data bars (bars with changing width in the cell) and icon sets (an icon in the cell).

    For example, a color scale allows assigning 0 a black background and 100 a green background. All values in between are calculated automatically; 50, for example, receives a dark green background.

  5. For this example, keep the default: Cell value is.

  6. Select an operator and the value of the cell you want to test for.

  7. Choose the style you want to apply when this condition is true or click New Style to define a new appearance.

  8. If you need additional conditions, click Add. Then repeat the previous steps.

  9. Confirm with OK. Now the style of your cells has changed.

12.5 Grouping and Ungrouping Cells

Grouping a cell range allows hiding parts of a spreadsheet. This makes spreadsheets more readable, as you can hide all the parts you are not currently interested in. It is possible to group rows or columns and nest groups in other groups.

To group a range, proceed as follows:

Procedure 12.5: Grouping a Selected Cell Range
  1. Select a cell range in your spreadsheet.

  2. Select Data › Group and Outline › Group. A dialog appears.

  3. Decide if you want to group your selected range by rows or by columns. Confirm with OK.

After grouping selected cells, a line indicating the grouped cell range appears in the upper-left margin. Fold or unfold the cell range with the + and – icons. The numbers at the top left of the margins display the depth of your groups and can be clicked too.

To ungroup a cell range, click into a cell which belongs to a group and select Data › Group and Outline › Ungroup. The line in the margin disappears. The innermost group is always deleted first.

12.6 Freezing Rows or Columns as Headers

If you have a spreadsheet with lots of data, scrolling usually makes the header disappear. LibreOffice can freeze rows, columns, or both, so that they remain visible as you scroll.

To freeze a single row or a single column, proceed as follows:

Procedure 12.6: Freezing a Single Row or Column
  1. To freeze all rows above a certain row, click the header of that row (1, 2, 3, ...).

    Alternatively, to freeze all columns to the left of a certain column, click the header of that column (A, B, C, ...).

  2. Select View › Freeze Rows and Columns. A dark line appears, indicating the frozen area.

It is also possible to freeze both rows and columns:

Procedure 12.7: Freezing Row and Column
  1. Click into the cell to the right of the column and below the row you want frozen. For example, if your header occupies the space from A1 to B3, click cell C4.

  2. Select View › Freeze Rows and Columns. A dark line appears, indicating which area is frozen.

To unfreeze, select View › Freeze Rows and Columns. The check mark before the menu item disappears.

13 LibreOffice Impress, Base, Draw, and Math


Besides LibreOffice Writer and LibreOffice Calc, LibreOffice also includes the modules Impress, Base, Draw, and Math. With these you can create presentations, design databases, draw up graphics and diagrams, and create mathematical formulas.

13.1 Using Presentations with Impress

Use LibreOffice Impress to create presentations for screen display or printing. If you have used other presentation software, switching to Impress is easy: it works very similarly.

13.1.1 Creating a Presentation

There are multiple ways to create a new Impress document:

  • From Scratch.  To create a new empty document, click File › New › Presentation.

  • Using a Wizard.  To use a standard format and predefined elements for your documents, use a wizard. Click File › Wizards › Presentation and follow the steps.

  • From a Template.  To use a template, click File › New › Templates and open, for example, Presentation Backgrounds. From the list of presentation templates, select the one that fits your needs.

The following procedure describes how to create a presentation by using the wizard. Proceed as follows:

Procedure 13.1: Creating a Presentation Using the Wizard
  1. Start LibreOffice.

  2. Select File › Wizards › Presentation.

  3. Choose From template. Select Presentation Backgrounds from the pop-up menu to set your preferred background and click Next.

  4. Select an output medium. The output medium is the form the final presentation will take, such as an overhead sheet, paper, or a slide show on a 4:3 screen or a 16:9 widescreen.

    To see a thumbnail showing your choices, make sure Preview is activated. If all options are set according to your wishes, click Next.

  5. To use effects for slide transitions, select an Effect and its Speed. The effect will be previewed immediately.

  6. Either use the default presentation type or choose Automatic to specify the amount of time each page displays and the length of the pause between presentations.

  7. If all options are set according to your wishes, click Create.

The presentation opens, ready for editing.

13.1.2 Using Master Pages

Master pages give your presentation a consistent look by defining what fonts and other design elements are used. Impress uses two types of master pages:

Slide Master

Contains elements that appear on all slides. For example, you might want your company logo to appear in the same place on every slide. The slide master also determines the text formatting style for the heading and outline of every slide that uses that master page, as well as any information you want to appear in a header or footer.

Notes Master

Determines the formatting and appearance of the notes in your presentation.

13.1.2.1 Creating a Slide Master

Impress comes with a collection of preformatted master pages. To customize presentations further, create your own slide masters.

  1. Start Impress with an existing presentation or create a new one as described in Section 13.1.1, “Creating a Presentation”.

  2. Click View › Slide Master.

    This opens the current slide master in Master View. The Master View toolbar appears.

  3. Right-click the left-hand panel, then click New Master.

  4. Edit the slide master until it has the desired look.

    Master view allows editing outline styles by directly formatting the sample text on the slide.

  5. To finish editing slide masters, in the Master View toolbar, click Close Master View. Alternatively, choose View › Normal.

Tip
Tip: Collect Slide Masters in a Template

When you have created all of the slide masters you want to use in your presentations, you can save them in an Impress template. Then, any time you want to create presentations that use those slide masters, open a new presentation with your template.

13.1.2.2 Applying a Slide Master

Slide masters can be applied to selected slides or to all slides of a presentation.

  1. Open your presentation.

  2. (Optional) To apply a slide master to multiple slides but not all slides: Select the slides that you want a slide master applied to.

    To select multiple slides, hold Ctrl in the Slides Pane while clicking the slides you want to use.

  3. In the Tasks pane, open the Master Pages and click the master page you want to apply. The slide master is applied to the corresponding page or pages.

    If you do not see the Task Pane, click View › Task Pane.

13.2 Using Databases with Base

LibreOffice includes the database module Base. Use Base to design a database to store many kinds of information, from a simple address book or recipe file to a sophisticated document management system.

Tables, forms, queries, and reports can be created manually or by using convenient wizards. For example, the Table Wizard contains several common fields for business and personal use. Databases created in Base can be used as data sources, such as when creating form letters.

It is beyond the scope of this document to detail database design with Base. Find more information at the sources listed in Section 10.11, “For More Information”.

13.2.1 Creating a Database Using Predefined Options

Base comes with several predefined database fields to help you create a database. A wizard guides you through the steps to create a new database. The steps in this section are specific to creating an address book using predefined fields, but the same steps apply to any of the other built-in database options.

The process for creating a database can be broken into several subprocesses:

13.2.1.1 Creating the Database

  1. Start LibreOffice Base.

    The Database Wizard starts.

    You can choose between creating an HSQLDB or Firebird database.

    HSQLDB Embedded (default)

    This database format is also available in older versions of OpenOffice.org and LibreOffice. It depends on Java being installed on the computer (see the check after this procedure).

    Firebird Embedded

    This database format can only be used in newer versions of LibreOffice. It does not depend on Java. When you do large database operations, Firebird can perform better.

  2. Proceed with Next.

  3. Click Yes, register the database for me to make your database information available to other LibreOffice modules. Then select the check boxes Open the database for editing and Create tables using the table wizard, and click Finish.

  4. Browse to the directory where you want to save the database, specify a name for the database, then click Save.
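
Since the default HSQLDB back-end requires a Java runtime (as noted in the first step), it can be worth verifying that one is installed before you start. A minimal check from a shell; the package name is the OpenJDK build shipped with SUSE Linux Enterprise Desktop 12, so adjust it if your version differs:

# Print the Java version, or install an OpenJDK runtime if none is found
java -version || sudo zypper install java-1_8_0-openjdk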

13.2.1.2 Setting Up the Database Table

After you have created the database, if you have selected the Create tables using the table wizard check box, the table wizard opens. If you have not, go to the Task area and click Use Wizard to Create Table. Next, define the fields you want to use in your database table.

In this example, set up an address database.

  1. For this example, click Personal.

    The list Sample tables changes to show the predefined tables for personal use, including the address table template. The table templates listed under Business contain predefined business tables.

  2. In the Sample tables list, click Addresses.

    The available fields for the predefined address book appear in the Available fields menu.

  3. In the Available fields menu, click the fields you want to use in your address book.

    Select one item at a time by clicking it. Alternatively, hold Ctrl while clicking to select multiple individual items, or hold Shift to select a range of items.

  4. Click the single right arrow or single left arrow icon to move selected items to or from the Selected fields list.

    To move all available fields to the Selected fields list, click the double right arrow icon.

  5. Use the up arrow and down arrow icons to adjust the order of the selected entries, then click Next.

    The fields appear in the table and forms in the order in which they are listed.

  6. Make sure each of the fields is defined correctly.

    You can change the field name, type, maximum characters and whether it is a required field. For this example, leave the settings as they are, then click Next.

  7. Make sure that Create a primary key and Automatically add a primary key are activated. Additionally activate Auto value.

    Proceed with Next.

  8. Give the table a name, and activate Create a form based on this table.

    Proceed with Finish.

13.2.1.3 Creating a Form

Next, create the form to use when entering data into your address book.

After the previous step, you should already be in the Form Wizard. Otherwise, open it from the main window: under Tables, right-click the correct table and click Form Wizard.

  1. In the Form Wizard, click the double right-arrow icon to move all available fields to the Fields in the form list, then click Next.

  2. To add a subform, activate Add Subform, then click Next.

    For this example, accept the default selections.

  3. Select how you want to arrange your form, then click Next.

  4. Select The form is to display all data and leave all of the check boxes deactivated, then click Next.

  5. Apply a style and field border, then click Next.

    For this example, accept the default selections.

  6. Name the form, activate Modify the form, then click Finish.

13.2.1.4 Modifying the Form

After the form has been defined, you can modify the appearance of the form to suit your preferences.

After the previous step, you should already be in the Database Form editor. If not, select the right form by clicking Forms in the side bar of the main window. Then, in the Forms area, right-click the correct form and select Edit.

  1. Arrange the fields on the form by dragging them to their new locations.

    For example, move the field First Name, so it appears to the right of the field Last Name.

  2. When you have finished modifying the form, save it and close it.

13.2.1.5 Further Steps

After you have created your database tables and forms, you are ready to enter your data. You can also design queries and reports to help sort and display the data.

Refer to LibreOffice online help and other sources listed in Section 10.11, “For More Information” for additional information about Base.

13.3 Creating Graphics with Draw

Use LibreOffice Draw to create graphics and diagrams. You can export your drawings to the most common vector graphics formats and import them into any application that lets you import graphics, including other LibreOffice modules. You can also create Adobe* Flash* (SWF) versions of your drawings.

Procedure 13.2: Creating a Graphic
  1. Start LibreOffice Draw.

  2. Use the Drawing toolbar at the right side of the window to create a graphic. To create a new shape or text object, use the shape buttons of the toolbar:

    • To create a single shape or text object, click a shape button once. Then click and drag over the document to create an object.

    • To create multiple shapes or text objects of the same type, double-click a shape button. Then click and drag over the document to create the objects. When you are done, click the mouse pointer icon in the toolbar.

  3. Save the graphic.

To embed an existing Draw graphic into a LibreOffice document, select Insert › Object › OLE Object. Select Create from file and click Search to navigate to the Draw file to insert.

To be able to edit the graphic later on its own, activate Link to file.

If you insert a file as OLE object, you can edit the object later by double-clicking it.

Procedure 13.3: Opening Draw From Other LibreOffice Modules

One particularly useful feature of Draw is the ability to open it from other LibreOffice modules, so you can create a drawing that is automatically imported into your document.

  1. From a LibreOffice module (for example, from Writer), click Insert › Object › OLE Object › LibreOffice Drawing › OK.

    The user interface of Writer will now be replaced by that of Draw.

  2. Create your drawing.

  3. Click in your document, outside the Draw frame.

    The drawing is automatically inserted into your document.

13.4 Creating Mathematical Formulas with Math

It is usually difficult to include complex mathematical formulas in your documents. To make this task easier, the LibreOffice Math equation editor lets you create formulas using operators, functions, and formatting assistants. You can then save those formulas as an object that can be imported into other documents. Math functions can be inserted into other LibreOffice documents like any other graphic object.

Note
Note: Math is For Creating Mathematical Formulas

Math is not a calculator. The functions it creates are graphical objects. Even if they are imported into Calc, these functions cannot be evaluated.

To create a formula, proceed as follows:

  1. Start LibreOffice Math.

  2. Click File › New › Formula. The formula window opens.

  3. Enter your formula in the lower part of the window. For example, the binomial theorem in LibreOffice Math syntax is:

    (a + b)^2 = a^2 + 2 a b + b^2

    The result is displayed in the upper part of the window.

  4. Use the side bar panel Formula Elements or right-click the lower part of the window to insert other terms. If you need symbols, use Tools › Symbols to, for example, insert Greek or other special characters.

  5. Save the document.

The result is shown in Figure 13.1, “Mathematical Formula in LibreOffice Math”:

Mathematical Formula in LibreOffice Math
Figure 13.1: Mathematical Formula in LibreOffice Math
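
The Math markup covers far more than simple polynomials. A few more examples of the syntax (these are standard Math markup elements; the exact appearance depends on the rendering):

x = {-b +- sqrt{b^2 - 4ac}} over {2a}
sum from{i=1} to{n} i = {n(n+1)} over 2
%alpha^2 + %beta^2 = %gamma^2

Here, over creates a fraction, sqrt a square root, sum a large operator with from/to limits, and %alpha inserts a Greek letter.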

It is possible to include your formula in Writer, for example. To do so, proceed as follows:

  1. Create a new Writer document or open an already existing one.

  2. Select Insert › Object › OLE Object in the main menu. The Insert OLE Object window appears.

  3. Select Create from file.

  4. Click Search to locate your formula. To choose the formula file, click Open.

    To be able to edit the formula later on its own, activate Link to file.

  5. Confirm with OK. The formula is inserted at the current cursor position.

Part IV Internet, Communication and Collaboration

14 Firefox: Browsing the Web

The Mozilla Firefox Web browser is included with SUSE® Linux Enterprise Desktop. With features like tabbed browsing, pop-up window blocking and download management, Firefox combines the latest browsing and security technologies with an easy-to-use interface. Firefox gives you easy access to different search engines to help you find the information you need.

15 Evolution: E-Mailing and Calendaring

Evolution makes storing, organizing, and retrieving your personal information easy, so you can work and communicate more effectively with others. It is a professional groupware program and an important part of the Internet-connected desktop.

16 Empathy: Instant Messaging

Empathy is an instant messaging (IM) client that allows you to connect to multiple accounts simultaneously. Chat live with your contacts in one tabbed interface, regardless of which IM system they use. Empathy uses Telepathy for protocol support.

17 Ekiga: Using Voice over IP

Ekiga is an application you can use for making phone calls via Voice over IP (VoIP), for video conferencing and for instant messaging.

14 Firefox: Browsing the Web

Abstract

The Mozilla Firefox Web browser is included with SUSE® Linux Enterprise Desktop. With features like tabbed browsing, pop-up window blocking and download management, Firefox combines the latest browsing and security technologies with an easy-to-use interface. Firefox gives you easy access to different search engines to help you find the information you need.

14.1 Starting Firefox

To start Firefox, select Applications › Internet › Firefox.

14.2 Navigating Web Sites

The look and feel of Firefox is similar to that of other browsers. It is shown in Figure 14.1, “The Browser Window of Firefox”. At the top of the window you find the location bar for a Web address, and the search bar. Bookmarks are also available for quick access from the bookmarks toolbar. For more information about the various Firefox features, use the Help menu in the menu bar.

Note
Note: Using the Menu Bar

While most functions of Firefox are available through the three-lines menu button, some are only available from the menu bar.

The menu bar of Firefox is hidden by default. To temporarily show it, press Alt. It will then be displayed until you click elsewhere in the Firefox window.

To permanently enable the Firefox menu bar, first press Alt, then choose View › Toolbars and activate Menu Bar.

The Browser Window of Firefox
Figure 14.1: The Browser Window of Firefox

14.2.1 The Location Bar

When typing into the location bar, an auto-completion drop-down box opens. It shows all previous location addresses and bookmarks containing the characters you type. The matching phrase is highlighted in bold. Entries visited most frequently and recently are listed first.

List entries from the bookmark list are marked with a star. Bookmarks with tags are marked with an additional label followed by the tag names. List entries from the browsing history are not marked. To search in your bookmarks only, type * as the first character of your search.

Use the ↓ and ↑ arrow keys or the mouse wheel to navigate the list. Press Enter or click an entry to go to the selected page. Del removes an entry from the list if it is an entry from the history. Bookmarked entries can only be removed by deleting the associated bookmark.

14.2.2 Zooming

Firefox offers two zooming options: page zoom (the default) and text zoom. Page zoom enlarges the entire page, with all elements, including graphics, expanding equally, while text zoom only changes the text size.

To toggle between page and text zoom, from the menu bar, choose View › Zoom › Zoom Text Only. To zoom in or out either use the mouse wheel while holding the Ctrl key, or use Ctrl+ and Ctrl-. Reset the zoom factor with Ctrl0.

14.2.3 Tabbed Browsing

Tabbed browsing allows you to load multiple Web sites in a single window. To switch between pages in use, use the tabs at the top of the window. If you often use more than one Web page at a time, tabbed browsing makes it easier to switch between pages.

Opening Tabs

To open a new tab, from the menu bar, select File › New Tab or press CtrlT. This opens an empty tab in the Firefox window. To open a link on a Web page or a bookmark in a tab, middle-click it. Alternatively, right-click a link and select Open Link in New Tab. You may also open an address in the location bar in a new tab with a middle-click or by pressing CtrlEnter.

Closing Tabs

Right-click a tab to open a context menu, giving you access to tab managing options such as closing, reloading, or bookmarking. To close a tab, you may also use CtrlW or click the close button. Any closed tab can be restored by choosing from the menu bar, History › Recently Closed Tabs. To reopen the last closed tab, either choose Undo Close Tab from the context menu or press CtrlShiftT.

Sorting Tabs

By default, tabs are sorted in the order you opened them. Rearrange the tab order by dragging and dropping a tab to the desired position. If you have opened a large number of tabs, they cannot all be displayed in the tab bar at the same time. Use the arrows at the ends of the bar to scroll left or right, or click the down arrow at the right end of the tab bar to get a list of all tabs.

Dragging and Dropping

Drag and drop also works with tabs. Drag a link onto an existing tab to open it in that tab or drag and drop a link on an empty space in the tab bar to open a new tab. Drag and drop a tab outside of the tab bar to open it in a new browser window.

14.2.4 Using the Sidebar

Use the left side of your browser window for viewing bookmarks or browsing history. Extensions may add new ways to use the sidebar as well. To display the sidebar, from the menu bar, select View › Sidebar and select the desired contents.

14.3 Finding Information

There are two ways to find information in Firefox: to search the Internet with a search engine, use the search bar. To search the page currently displayed, use the find bar.

14.3.1 Finding Information on the Web

Firefox has a search bar that can access different engines like Google, Yahoo, or Amazon. For example, if you want to find information about SUSE using the current engine, click in the search bar, type SUSE, and press Enter. The results appear in your window.

To choose a different search engine, type your search term, then click one of the search provider icons at the bottom of the appearing pop-up.

14.3.1.1 Customizing the Search Bar

If you want to change the order, add, or delete a search engine, proceed as follows.

  1. Click the icon to the left of the search bar.

  2. From the pop-up, select Change Search Settings. The Search dialog shows the engine that is currently set as default search engine and other available search engines.

  3. To change the order of entries, use the mouse to drag them.

    To delete an entry, select it and click Remove.

    To add a search engine, click Add More Search Engines. Firefox displays a Web page with available search plug-ins. To install a search plug-in, select it and click Add to Firefox.

Some Web sites offer search engines that you can add directly to the search bar. Whenever you are visiting such a Web site, the icon to the left of the search bar gains a + sign. Click the icon and select Add.

14.3.1.2 Adding Keywords to Your Online Searches

Firefox lets you define your own keywords: abbreviations that act as a URL shortcut for a particular search engine. If you have defined ws as a keyword for the Wikipedia search, for example, you can type ws SEARCHTERM into the location bar to search Wikipedia for SEARCHTERM.

To assign a shortcut for a search engine from the search bar, click the icon to the left of the search bar and select Change Search Settings. Select a search engine, double-click its Keyword column, enter a keyword and press Enter.

It is also possible to define a keyword for any search field on a Web site. Proceed as follows:

  1. Right-click the search field and choose Add a Keyword for this Search from the menu that opens. The Add Bookmark dialog appears.

  2. In Name, enter a descriptive name for this keyword.

  3. Enter your Keyword for this search.

  4. Save this keyword.
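
Under the hood, such a keyword bookmark is an ordinary bookmark whose location contains the placeholder %s, which Firefox replaces with whatever you type after the keyword. A sketch of a location you could enter manually in a bookmark's properties (the Wikipedia URL serves as an example):

https://en.wikipedia.org/wiki/Special:Search?search=%s

With the keyword ws assigned to this bookmark, typing ws Linux in the location bar opens the Wikipedia search results for Linux.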

Tip
Tip: Keywords for Regular Web Sites

Using keywords is not restricted to search engines. You can also add a keyword to a bookmark (via the bookmark's properties). For example, if you assign suse to the SUSE home page bookmark, you can open it by typing suse into the location bar.

14.3.2 Searching in the Current Page

To search inside a Web page, in the menu bar, click Edit › Find or press CtrlF. The find bar opens, usually at the bottom of the window. Type your query in the text box. Firefox finds the first occurrence of this phrase as you type. Find further occurrences of the phrase by pressing F3 or clicking the Next button in the find bar. Clicking the Highlight All button highlights all occurrences of the phrase. Activating the Match Case option makes the query case-sensitive.

Firefox also offers two quick-find options. To start a search anywhere on a Web page, click into the page, then press / immediately followed by the search term. The first occurrence of the search term is highlighted as you type. Press F3 to find the next occurrence. To limit quick-find to links only, press ' instead of /.

14.4 Managing Bookmarks

Bookmarks offer a convenient way of saving links to your favorite Web sites. Firefox not only makes it very easy to add new bookmarks with just one mouse click, it also offers multiple ways to manage large bookmark collections. You can sort bookmarks into folders, classify them with tags, or filter them with smart bookmark folders.

Add a bookmark by clicking the star in the location bar. The star will turn blue to indicate the page was bookmarked. The bookmark will be saved in the Unsorted Bookmarks folder under the page title. To change the name and folder of your bookmark or add tags, after bookmarking, click the star again. This will open a pop-up where you can make your changes.

To bookmark all open tabs, right-click a tab and choose Bookmark All Tabs. Firefox asks you to create a new folder for the tab links.

To remove a bookmark, open the bookmarked location. Then, click the star and click Remove Bookmark.

14.4.1 Organizing Bookmarks

The Library can be used to manage the properties (name and location) of each bookmark and to organize the bookmarks into folders and sections. It resembles Figure 14.3, “The Firefox Bookmark Library”.

The Firefox Bookmark Library
Figure 14.3: The Firefox Bookmark Library

To open the Library, in the menu bar, click Bookmarks › Show All Bookmarks. The library window is split into two parts: the left pane shows the folder tree view, the right pane the subfolders and bookmarks of the selected folder. Use Views to customize the right pane. The left pane contains three main folders:

History

Contains your complete browsing history. You cannot alter this list other than by deleting entries from it.

Tags

Lists bookmarks for each tag you have specified. See Section 14.4.2, “Tags” for more information on tags.

All Bookmarks

This category contains the three main bookmark folders:

Bookmarks Toolbar

Contains the bookmarks and folders displayed beneath the location bar. See Section 14.4.6, “The Bookmarks Toolbar” for more information.

Bookmarks Menu

Holds the bookmarks and folders accessible via the Bookmarks entry in the main menu or the bookmarks side menu.

Unsorted Bookmarks

Contains all bookmarks created with a single click on the star in the location bar. This folder is only visible in the library and the bookmarks sidebar.

Organize your bookmarks using the right pane. Choose actions for folders or bookmarks either from the context menu that opens when you right-click an item or from the Organize menu. The properties of a chosen folder or bookmark can be edited in the bottom part of the right pane. By default, only Name, Location, and Tags are displayed for a bookmark. Click the arrow next to More to gain access to all properties.

To rearrange your bookmarks, use the mouse to drag them. You can use this to move a bookmark or a folder to a different folder, or to change the order of bookmarks in a folder.

14.4.2 Tags

Tags offer a convenient way to file a bookmark under several categories. You can tag a bookmark with as many terms as you want. For example, to access all sites tagged with suse, enter suse into the location bar. For each tag, an item is automatically created in the Recent Tags folder of the library. Drag and drop an item for a tag onto the bookmark toolbar to easily access it.

To add tags to a bookmark, open the bookmark in Firefox and click the star in the location bar. The Edit This Bookmark dialog opens where you can add a comma-separated list of tags. It is also possible to add tags via the bookmark's properties dialog, which you can open in the library or by right-clicking a bookmark in the menu or the toolbar.

14.4.3 Importing and Exporting Bookmarks

To import bookmarks from another browser or from a file in HTML format, open the library by choosing Bookmarks › Show All Bookmarks from the menu bar. To start the Import Wizard, click Import and Backup › Import Bookmarks from HTML and choose the file to import. Start the import by clicking Next. Bookmarks from an HTML file are imported as is.

Exporting bookmarks is also done via Import and Backup in the library window. To save your bookmarks as an HTML file, choose Export Bookmarks to HTML. To create a backup of your bookmarks, choose Backup. Firefox uses the JavaScript Object Notation (JSON) file format (.json) for backups.

To restore a bookmark backup, click Import and Backup › Restore. Then locate the backup you want to restore from.
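
Firefox also creates automatic bookmark backups in the bookmarkbackups folder of your profile. A minimal sketch for locating them from a shell, assuming a default profile path under ~/.mozilla/firefox:

ls ~/.mozilla/firefox/*.default*/bookmarkbackups/

Any file found there can be restored via Import and Backup › Restore.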

14.4.4 Live Bookmarks

Live Bookmarks display headlines in your bookmark menu and keep you up to date with the latest news from your favorite sites at a glance. Live bookmarks update automatically. Many sites and blogs support this format.

To create a Live Bookmark, look for orange buttons on Web sites that either read RSS or consist of a dot and three nested quarter circles. Click the icon. Usually, that will lead you to a page where all the headlines of the page are displayed. On that page, choose Subscribe Now. A dialog opens in which to select the name and location of your live bookmark. Confirm with Add. This page also lets you choose alternative applications to subscribe with, such as My Yahoo.

14.4.5 Smart Bookmark Folders

Smart bookmark folders are virtual bookmark folders that are dynamically updated. There are three smart bookmark folders: The Most Visited links are available from your bookmarks toolbar. Recently Bookmarked links and Recent Tags are located in the bookmarks menu.

14.4.6 The Bookmarks Toolbar

The Bookmarks Toolbar is displayed beneath the location bar and lets you quickly access bookmarks. You can also add, organize, and edit bookmarks directly. By default, the Bookmarks Toolbar is populated with a predefined set of bookmarks organized into several folders (see Figure 14.1, “The Browser Window of Firefox”).

To manage the Bookmarks Toolbar you can use the library as described in Section 14.4.1, “Organizing Bookmarks”. Its content is located in the folder Bookmarks Toolbar. It is also possible to manage the toolbar directly. To add a folder, bookmark, or separator, right-click an empty space in the toolbar and select the appropriate entry from the pop-up menu. To add the current page to the bar, click the icon of the Web page in the location bar and drag it to the desired position on the bookmarks toolbar. Hovering over an existing bookmark folder will automatically open it, enabling you to place the bookmark within this folder.

To manage a certain folder or bookmark, right-click it. A menu opens which lets you Delete it or change its Properties. To move or copy an entry, choose Cut or Copy and Paste it to the desired position.

14.5 Using the Download Manager

Keep track of your current and past downloads with the download manager. To start the download manager, in the menu bar, click Tools › Downloads. While downloading a file, a progress bar indicates the download status. If necessary, pause the download and resume it later. To open a downloaded file with the associated application, click Open. To open the location to which the file was saved, choose Open Containing Folder. Remove From History only deletes the entry from the download manager; it does not delete the file from the hard disk.

By default, all files are downloaded to ~/Downloads. To change this behavior, in the menu bar, click Edit › Preferences. Go to General. Under Downloads, either choose another location or Always ask me where to save files.

Tip
Tip: Resuming Downloads

If your browser crashes or is closed while downloading, all pending downloads will automatically be resumed in the background when starting Firefox the next time. A download that was paused before the browser was closed can manually be resumed via the download manager.

14.6 Security

Since browsing the Internet has become more risky, Firefox offers various measures to make browsing safer. It automatically checks whether you are trying to access a site known to contain harmful software (malware) or a site known to steal sensitive data (phishing) and stops you from entering these sites. The Instant Web Site ID lets you easily check a site's legitimacy, and a password manager and the pop-up blocker offer additional security. With Private Browsing, you can surf the Internet without Firefox recording data on your computer.

14.6.1 Instant Web Site ID

Firefox allows you to check the identity of a Web page with a single glance. The icon in the location bar next to the address indicates which identity information is available and whether communication is encrypted:

Gray Globe

The site does not provide any identity information and communication between Web server and browser is not encrypted. Do not exchange sensitive information with such sites.

Gray Triangle

This site is from a domain that has been verified by a certificate, so you can be sure that you are really connected to the very site it claims to be. However, the site tried to load additional elements, such as images or scripts over an insecure connection. Firefox has blocked these items. Therefore, the page can look broken.

Gray Padlock

This site is from a domain that has been verified by a certificate, so you can be sure that you are really connected to the very site it claims to be. Communication with a gray-padlock site is always encrypted.

Green Padlock

This site completely identifies itself by a certificate that ensures a site is owned by the person or organization it claims to be. This is especially important when exchanging very sensitive data, for example when doing money transactions over the Internet, because the complete identity information lets you be sure you are really on the Web site of your bank. Communication with a green-padlock server is always encrypted.

To view detailed identity information, click the icon of the Web site in the location bar. In the opening pop-up, click More Information to open the Page Info window. Here, you can view the site's certificate, the encryption level, and information about stored passwords and cookies.

With the Permissions view you can set per-site permissions for image loading, pop-ups, cookies and installation permissions. The Media view lists all images, background graphics and embedded objects from a site and displays further information on each item together with a preview. It also lets you save individual items.

The Firefox Page Info Window
Figure 14.4: The Firefox Page Info Window

14.6.2 Importing Certificates

Firefox comes with a certificate store for identifying certificate authorities (CAs). Using these certificates enables the browser to automatically verify certificates issued by Web sites. If a Web site issues a certificate that has not been signed by one of the CAs from the certificate store, it is not trusted. This ensures that no spoofed certificates are accepted.

Large organizations usually use their own certificate authorities in-house and distribute the respective certificates via the system-wide certificate store located at /etc/pki/nssdb. To configure Firefox (and other Mozilla tools, such as Thunderbird) to use this system-wide CA store in addition to its own, export the NSS_USE_SHARED_DB variable. For example, you can add the following line to ~/.bashrc:

export NSS_USE_SHARED_DB=1
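
The shared NSS database can also be inspected and extended from the command line with certutil (on SUSE Linux Enterprise Desktop, the tool is part of the mozilla-nss-tools package). A minimal sketch, assuming your in-house CA certificate is stored in ca.pem and should be trusted for server authentication:

# List the certificates currently in the system-wide NSS database
certutil -d sql:/etc/pki/nssdb -L

# Import the CA certificate with the trust flag for SSL server certificates
# (the nickname "Example CA" and the file name ca.pem are placeholders)
sudo certutil -d sql:/etc/pki/nssdb -A -t "C,," -n "Example CA" -i ca.pem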

Alternatively or additionally you can manually import certificates. To do so, in the menu bar, open the Preferences dialog by clicking Edit › Preferences. Select Advanced › Certificates › View Certificates › Your Certificates › Import and select the certificate to import. Only import certificates you absolutely trust!

14.6.3 Password Management

Each time you enter a user name and a password on a Web site, Firefox offers to store this data. A pop-up at the top of the page opens, asking you whether you want Firefox to remember the password. If you accept by clicking Remember, the password will be stored on your hard disk in an encrypted format. The next time you access this site, Firefox will automatically fill in the login data.

To review or manage your passwords, open the password manager by clicking Edit › Preferences › Security › Saved Passwords in the menu bar. The password manager opens with a list of sites and their corresponding user names. By default, the passwords are not displayed. You can click Show Passwords to display them. To delete single or all entries from the list, click Remove or Remove All, respectively.

To protect your passwords from unauthorized access, you can set a master password that is required when managing or adding passwords. In the menu bar, click Edit › Preferences, choose the category Security and activate Use a Master Password.

14.6.4 Private Browsing

By default, Firefox keeps track of your browsing history by storing content and links of visited Web sites, cookies, downloads, passwords, search terms and form data. Collecting and storing this data makes browsing faster and more convenient. However, when you use a public terminal or a friend's computer, for example, you may want to turn this behavior off. In Private Browsing mode, Firefox will neither keep track of your browsing history nor cache the content of pages you have visited.

To enable Private Browsing mode, in the menu bar, click File › New Private Window. The current Web site and all open tabs will be replaced by the Private Browsing information screen. As long as you browse in private mode, the string (Private Browsing) is displayed in the title bar of the window.

Disable Private Browsing by closing the private window.

To make Private Browsing the default mode, open the Privacy tab in the Preference window as described in Section 14.7.1, “Preferences”, set the option Firefox will: to Use custom settings for history and then choose Always use private browsing mode.

Warning
Warning: Bookmarks and Downloads

Downloads and bookmarks you made during Private Browsing mode will be kept.

14.7 Customizing Firefox

Firefox can be customized extensively.

  • Change the way Firefox behaves by altering its preferences.

  • Add functionality by installing extensions.

  • Change the look and feel by installing themes.

To manage extensions, themes and plug-ins, Firefox has an add-on manager.

14.7.1 Preferences

Firefox offers a wide range of configuration options. These are available by choosing Edit › Preferences in the menu bar. Each option is described in detail in the online help, which can be accessed by clicking the question mark icon in the dialog.

The Preferences Window
Figure 14.5: The Preferences Window

14.7.1.1 Session Management

By default, Firefox automatically restores your session—windows and tabs—only after it has crashed, or after a restart because of an extension. However, it can be configured to restore a session every time it is started: Open the Preferences dialog as described in Section 14.7.1, “Preferences” and go to the category General. Set the option When Firefox Starts: to Show My Windows and Tabs from Last Time.

When you have multiple windows open, they will only be restored the next time if you close all of them at once with File › Quit (from the menu bar) or with CtrlQ. If you close the windows one by one, only the last window will be restored.
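
The same behavior can also be preset in the profile's user.js file via the preference browser.startup.page, where the value 3 restores the previous session. A minimal sketch, assuming a single default profile under ~/.mozilla/firefox:

# Locate the default Firefox profile (the path pattern is an assumption)
profile=$(ls -d ~/.mozilla/firefox/*.default* | head -n 1)

# 3 = Show My Windows and Tabs from Last Time
echo 'user_pref("browser.startup.page", 3);' >> "$profile/user.js"

Firefox evaluates user.js at every start, so the value overrides any change made later in the Preferences dialog.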

14.7.1.2 Language Preferences for Web Sites

When sending a request to a Web server, the browser always sends information about which language the user prefers. Web sites that are available in more than one language (and are configured to evaluate this language parameter) will display their pages in the language the browser requests. On SUSE Linux Enterprise Desktop, the preferred language is preconfigured to match the desktop language. To change this setting, open the Preferences window as described in Section 14.7.1, “Preferences”, go to the category Content and choose your preferred language.

14.7.1.3 Spell Checking

By default, Firefox spell-checks what you type when typing into multiple-line text boxes. Misspelled words are underlined in red. To correct a word, right-click it and select the correct spelling from the context menu. You may also add the word to the dictionary, if it is correct.

To change or add a dictionary, right-click anywhere in a multi-line text box and select the appropriate option from the context menu. Here you may also disable spell-checking for this text box. If you want to globally disable spell checking, open the Preferences window as described in Section 14.7.1, “Preferences” and go to the category Advanced. Deactivate Check My Spelling As I Type.

14.7.2 Add-ons

Extensions let you personalize Firefox to fit your needs. With extensions, you can change the look and feel of Firefox, enhance existing functionality, and add functions. For example, extensions can enhance the download manager, show the weather, or control Web music players. Other extensions assist Web developers or increase security by blocking content such as ads or scripts.

There are thousands of extensions available for Firefox. With the add-ons manager, you can install, enable, disable, update, and remove extensions.

If you do not like the standard look and feel of Firefox, install a new theme. Themes do not change the functionality, only the appearance of the browser.

14.7.2.1 Installing Add-ons

To add an extension or theme, start the add-ons manager with Tools › Add-Ons from the menu bar. It opens on the Get Add-Ons tab, displaying either a choice of recommended add-ons or the results of your last search.

Use the Search All Add-Ons field to search for specific add-ons. Click an entry in the list to view a short description. Install the add-on by clicking Install or open a Web page with detailed information by clicking the More link.

Installing Firefox Extensions
Figure 14.6: Installing Firefox Extensions

To activate freshly installed extensions or themes, Firefox sometimes needs to be restarted by clicking Restart now in the add-ons manager. Restart this way to make sure that your browsing session will be restored.

14.7.2.2 Managing Add-ons

The Add-ons Manager also offers a convenient interface to manage extensions, themes, and plug-ins. Extensions can be enabled, disabled or uninstalled. If an extension is configurable, its configuration options can be accessed via the Preferences button. In the Appearance tab you may Uninstall a theme, or activate a different theme by clicking Enable. Pending extension and theme installations are also listed. Select Cancel to stop the installation. Although you cannot install Plug-Ins as a user, you may disable or enable them with the Add-ons manager.

Some add-ons require you to restart the browser when you uninstall or disable them. In such cases, after clicking either of these actions, a Restart now link appears in the add-ons manager.

14.8 Printing from Firefox

Before you actually print a Web page, you can use the print preview function to check how the printed page will look. From the menu bar, choose File › Print Preview. Configure paper size and orientation per printer with Page Setup.

To print a Web page either choose, from the menu bar, File › Print or press CtrlP. The Printer dialog opens. To print with the default options click Print.

The Printer dialog also offers extensive configuration options to fine-tune the printout. On the General tab, choose a printer, the range to print, the number of copies and the order. Page Setup lets you specify the number of pages per side, the scaling factor, and paper source and type. If the printer supports it, you can also activate double-sided printing here. Control how frames, backgrounds, header and footer are printed on the Options tab.

14.9 For More Information

To get more information about Firefox, see the following links:

Mozilla forums: https://www.mozilla.org/about/forums/
Main Menu reference: http://support.mozilla.org/kb/Menu+reference
Preferences reference: http://support.mozilla.org/kb/Options+window
Keyboard shortcuts: http://support.mozilla.org/kb/Keyboard+shortcuts

15 Evolution: E-Mailing and Calendaring


Evolution makes storing, organizing, and retrieving your personal information easy, so you can work and communicate more effectively with others. It is a professional groupware program and an important part of the Internet-connected desktop.

Evolution can help you work in a group by handling e-mail, contact information, and one or more calendars. It can do that on one or several computers, connected directly or over a network, for one person or for large groups.

Evolution helps you accomplish common daily tasks quickly. For example, you can easily reuse appointment or contact information sent to you by e-mail, or send e-mails to a contact or appointment. If you receive lots of e-mail, you can use advanced features like search folders, which let you save searches as though they were ordinary e-mail folders.

This chapter introduces you to Evolution and helps you get started. For more details, refer to the Evolution application help.

15.1 Starting Evolution

To start Evolution, click Applications › Internet › Evolution.

15.2 Setup Assistant

The first time you start Evolution, it opens an assistant to help you set up e-mail accounts and import data from other applications.

The Evolution Account Assistant helps you provide all the required information.

15.2.1 Restoring from a Backup File

When the assistant starts, the Welcome page is displayed. Proceed to the Restore from Backup page. If you previously backed up your Evolution configuration and want to restore it, activate the restoration option and select the backup file in the file chooser dialog.

Otherwise, proceed to Identity.

15.2.2 Defining Your Identity

The Identity page is the next step in the assistant.

  1. Type your full name in the Full Name field.

  2. Type your e-mail address in the E-mail Address field.

  3. (Optional) Type an address in the Reply-To field.

    Only use this field if you want replies to e-mails from you to be sent to a different e-mail address.

  4. (Optional) Type your organization name in the Organization field.

    This is the company where you work, or the organization you represent when you send e-mails.

  5. Proceed to the next page.

15.2.3 Receiving Mail

The Receiving E-mail page lets you determine the server that you want to use to receive e-mail.

You need to specify the type of server you want to receive mail from. If you are not sure about the type of server, contact your system administrator or e-mail provider.

Select a server type in the Server Type list. The following is a list of available server types:

Exchange Web Services:  Allows you to connect to newer Microsoft Exchange servers to synchronize e-mail, calendar, and contact information. This is only available if you have installed the connector for Microsoft* Exchange*, which is packaged in evolution-ews.

IMAP+:  Keeps the e-mail on your server, so you can access your e-mail from multiple systems.

POP:  Downloads your e-mail to your hard disk for permanent storage, freeing up space on the e-mail server.

USENET News:  Connects to a news server and downloads a list of available news digests.

Local Delivery:  If you want to move e-mail from the spool and store it in your home directory, you need to provide the path to the mail spool you want to use. If you want to leave mail in your system’s spool files, select Standard Unix Mbox Spool File instead. A sketch of the conventional on-disk layouts for these local options follows this list.

MH Format Mail Directories:  To download your e-mail using mh or an mh-style program, you need to provide the path to the mail directory you want to use.

Maildir Format Mail Directories:  If you download your e-mail using Qmail or another Maildir-style program, select this option. You need to provide the path to the mail directory you want to use.

Standard Unix Mbox Spool File or Directory:  To read and store e-mail in the mail spool on your local system, select this option. You need to provide the path to the mail spool you want to use.

None:  If you do not plan to check e-mail with this account, select this option. There are no configuration options.
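
The local options above expect the conventional on-disk layouts sketched below; the exact paths on your system may differ:

# Standard Unix mbox spool: all mail in a single file per user
ls -l /var/spool/mail/"$USER"

# Maildir: a directory with tmp, new, and cur subdirectories,
# storing one file per message
mkdir -p ~/Maildir/{tmp,new,cur}

# MH format: a mail directory containing numbered message files
ls ~/Mail/inbox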

15.2.3.1 Configuration Options for IMAP+, POP, and USENET

If you selected IMAP+, POP, or USENET News as the server type, you need to specify additional information.

If you are not sure about the correct server address, user name or security setting, contact your system administrator or e-mail provider.

  1. Type the host name of your e-mail server into the text box Server.

  2. Type your user name for the account into the text box Username.

  3. Choose a security setting supported by your mail server. For security reasons, avoid using No Encryption.

  4. Select your authentication type in the Authentication list. To have Evolution check for supported authentication types, click Check for Supported Types. Then choose one of the options without a strikeout.

    Some servers do not announce the authentication mechanisms they support. Therefore clicking this button is not a guarantee that the shown mechanisms actually work.

  5. Proceed to the next page.

15.2.3.2 Configuration Options for Exchange Web Services

If you selected Exchange Web Services as the server type, you need to specify additional information.

If you are not sure about the correct server address, user name or security setting, contact your system administrator or e-mail provider.
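
As noted earlier, EWS support depends on the evolution-ews connector package. If the Exchange Web Services entry is missing from the Server Type list, the package can be installed from the command line:

sudo zypper install evolution-ews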

  1. Type your user name for the account into the text box Username.

  2. Type the EWS URL of your e-mail server into the text box Host URL.

    If available, type the address of an Offline Address Book into the text box OAB URL.

    If your login name and the name of your mailbox differ, select Open Mailbox of other user. Then type the mailbox name into the text box below.

  3. Select an authentication type in the Authentication list. To have Evolution check for supported authentication types, click Check for Supported Types. Then choose one of the options without a strikeout.

    Some servers do not announce the authentication mechanisms they support. Therefore clicking this button is not a guarantee that the shown mechanisms actually work.

  4. Proceed to the next page.

15.2.3.3 Local Configuration Options

If you selected Local Delivery, MH-Format Mail Directories, Maildir-Format Mail Directories, or Standard Unix Mbox Spool File or Directory, specify the path to the local files or directories in the path field.

15.2.4 Receiving Options

After you have selected a mail delivery mechanism, you can set some preferences for its behavior.

15.2.4.1 IMAP+ Receiving Options

If you selected IMAP+ as the receiving server type, you will now see a page of options to specify the behavior of Evolution.

  1. You can choose from the following options:

    Check for new messages every ... minutes

    Select if you want Evolution to automatically check for new mail. Set how often to check.

    Check for new message in all folders

    Select if you want to check for new messages in all folders.

    Check for new message in subscribed folders

    Select if you want to check for new messages in subscribed folders.

    Use Quick Resync if the server supports it

    Select to use Quick Resync which makes browsing mail faster on supported servers.

    Listen for server change notifications

    Select if you want Evolution to listen for change notifications. If you activate this option, Evolution will show you mail as it arrives. Therefore, you can usually deactivate Check for new messages every ... minutes.

    Show only subscribed folders

    Select if you want Evolution to show only subscribed folders.

    You can unsubscribe from folders to cut down on the number of irrelevant folders shown in Evolution and to reduce the amount of mail that is downloaded.

    Apply filters to new messages in all folders

    Select if you want to apply filters to new messages, and whether to do so in all folders or only in the Inbox folder.

    Check new messages for Junk contents

    Select if you want to check new messages for junk content, and whether to do so in all folders or only in the Inbox folder.

    Automatically synchronize remote mail locally

    Select this to download all your mail, so you can read it offline.

  2. Proceed to the next page.

15.2.4.2 POP Receiving Options

If you selected POP as the receiving server type, you will now see a page of options to specify the behavior of Evolution.

  1. You can choose from the following options:

    Check for new messages every ... minutes

    Select if you want Evolution to automatically check for new mail. Set how often to check.

    Leave messages on server

    Select if you want to leave your mail on the server or delete it from the server when you download it to your computer. You can also set a period of time for which the messages will be kept on the server after they were downloaded.

    Disable support for all POP3 extensions

    Disabling POP3 extensions can help with old or misconfigured servers. Select if you have trouble receiving mail.

  2. Proceed to the next page.

15.2.4.3 USENET News Receiving Options

If you selected USENET News as the receiving server type, you will now see a page of options to specify the behavior of Evolution.

  1. You can choose from the following options:

    Check for new messages every ... minutes

    Select if you want Evolution to automatically check for new mail. Set how often to check.

    Apply filters to new messages in all folders

    Select if you want to apply filters to new messages.

    Show folders in short notations

    Abbreviate folder names, for example, comp.os.linux appears as c.o.linux.

    In the subscription dialog, show relative folder names

    Display only the name of the folder. For example, the folder evolution.mail would appear as evolution.

  2. Proceed to the next page.

15.2.4.4 Exchange Web Services Receiving Options

If you selected Exchange Web Services as the receiving server type, you will now see a page of options to specify the behavior of Evolution.

  1. You can choose from the following options:

    Check for new messages every ... minutes

    Select if you want Evolution to automatically check for new mail. Set how often to check.

    Check for new message in all folders

    Select if you want to check for new messages in all folders.

    Listen for server change notifications

    Select if you want Evolution to listen for change notifications. If you activate this option, Evolution will show you mail as it arrives. Therefore, you can usually deactivate Check for new messages every ... minutes.

    Apply filters to new messages in all folders

    Select if you want to apply filters to new messages.

    Check new messages for Junk contents

    Select if you want to check new messages for junk content, and whether to do so in all folders or only in the Inbox folder.

    Automatically synchronize remote mail locally

    Select this to download all your mail, so you can read it offline.

    Connection timeout (in seconds)

    Set maximum time to wait for an answer from the server.

    Cache offline address book

    If you provided an OAB URL in the prior step, you can choose to cache the address book. This makes the address book available when offline.

  2. Proceed to the next page.

15.2.4.5 Local Delivery Receiving Options

If you selected that you want to receive mail through Local Delivery, you will now see a page of options to specify the behavior of Evolution.

  1. Select Check for new messages every ... minutes if you want Evolution to automatically check for new mail. Set how often to check.

  2. Proceed to the next page.

15.2.4.6 MH-Format Mail Directories Receiving Options

If you selected that you want to receive mail through MH-Format Mail Directories, you will now see a page of options to specify the behavior of Evolution.

  1. Select Check for new messages every ... minutes if you want Evolution to automatically check for new mail. Set how often to check.

    Select Use the .folders summary file to use the .folders summary file.

  2. Proceed to the next page.

15.2.4.7 Maildir-Format Mail Directories Receiving Options

If you selected that you want to receive mail through Maildir-Format Mail Directories, you will now see a page of options to specify the behavior of Evolution.

  1. Select Check for new messages every ... minutes if you want Evolution to automatically check for new mail. Set how often to check.

    Select Apply filters to new messages in Inbox if you want to apply filters to new messages.

  2. Proceed to the next page.

15.2.4.8 Standard Unix Mbox Spool or Directory Receiving Options

If you selected that you want to receive mail through a standard Unix mbox spool file or directory, you will now see a page of options to specify the behavior of Evolution.

  1. Select Check for new messages every ... minutes if you want Evolution to automatically check for new mail. Set how often to check.

    Select Apply filters to new messages in Inbox if you want to apply filters to new messages.

  2. Select Store status headers in Elm/Pine/Mutt format to store status headers in a way compatible with Elm, Pine, and Mutt.

  3. Proceed to the next page.

15.2.5 Sending Mail

Now that you have entered information about how you plan to receive mail, Evolution needs to know how you want to send it. Usually, a separate server configuration is necessary for this; otherwise, this page will be skipped.

Select a server type from the Server Type list.

The following server types are available:

Sendmail:  Uses the Sendmail program to send mail from your system. Sendmail is more flexible, but is not as easy to configure, so you should select this option only if you know how to set up a Sendmail service.

SMTP:  Sends mail using a separate mail server. This is the most common choice for sending mail. If you choose SMTP, there are additional configuration options.

Procedure 15.1: SMTP Configuration
  1. Type the host address in the Server field.

    If you are not sure what your host address is, contact your system administrator or e-mail provider.

  2. Select if your server requires authentication.

    If you selected that your server requires authentication, you need to provide the following information:

    1. Choose a security setting supported by your mail server. For security reasons, avoid using No Encryption.

    2. Select your authentication type in the Authentication list.

      or

      Click Check for Supported Types to have Evolution check for supported types. Then choose one of the options without a strikeout.

      Some servers do not announce the authentication mechanisms they support. Therefore, clicking this button is not a guarantee that the shown mechanisms actually work.

    3. Type your user name in the Username field.

  3. Proceed to the next page.

15.2.6 Final Steps

Now that you have finished the e-mail configuration process, you need to give the account a name, which can be any name you prefer. Type your account name in the Name field. Proceed to the next page and confirm your changes.

Depending on your configuration, you may now be asked for your e-mail passwords and whether you want to save them or enter them each time you start Evolution.

The Evolution main window will then open for the first time.

15.3 Using Evolution

Now that the first-run configuration has finished, you are ready to begin using Evolution. This section sums up the most important parts of the user interface.

Evolution Window
Figure 15.1: Evolution Window
Menu Bar

The menu bar gives you access to nearly all of the features of Evolution.

Folder List

The folder list gives you a list of the available folders for each account. To see the contents of a folder, click the folder name. The contents are displayed in the e-mail list.

Toolbar

The toolbar gives you fast and easy access to the frequently used features in each component.

Search Bar

The search bar lets you search for e-mails. You can filter e-mails, contacts, and calendar entries and tasks using different criteria: a label, a search term, and an account or folder. The Search bar can also save frequently used searches to a search folder.

Message List

The message list displays a list of e-mails that you have received. To view an e-mail in the preview pane, select the e-mail.

Shortcut Bar

The shortcut bar at the left lets you switch between folders and program components.

Statusbar

The statusbar periodically displays a message, or informs you about the progress of a task, such as sending e-mail.

On the far left, you will find the Online/Offline indicator. Click it to switch between using Evolution in online and offline mode.

Preview Pane

The preview pane displays the contents of the e-mails that are selected in the e-mail list.

15.3.1 The Menu Bar

The menu bar’s contents always provide all the possible actions for any view of your data.

File:  Anything related to a file or to the operations of the application usually falls under this menu, such as creating things, saving them to disk, printing them, and quitting the program itself.

Edit:  Contains tools to edit text and most configuration options.

View:  Allows configuring the appearance of Evolution.

Message:  Contains actions that can be applied to a message.

Folder:  Contains actions that can be performed on folders.

Search:  Lets you search for messages, or phrases within a message. You can also see previous searches you have made.

Help:  Opens the Evolution application help.

15.3.2 The Shortcut Bar

The shortcut bar is the column on the left side of the main window. At the top, there is a list of folders for the selected Evolution component. The buttons at the bottom are shortcuts to the individual components, such as Mail and Contacts.

The folder list organizes your e-mail, calendars, contact lists, and task lists in a tree. Most people find one to four folders at the base of the tree, depending on the component and their system configuration. Each Evolution component has at least one, called On This Computer, for local information. For example, the folder list for the e-mail component shows all your e-mail accounts, local folders, and search folders.

If you receive large amounts of e-mail, you need additional ways to organize it. In Evolution, you can create your own e-mail folders, address books, calendars, task lists, or memo lists.

15.3.2.1 Creating a Folder

To create a new folder:

  1. Click File › New › Mail Folder.

  2. Type the name of the folder in the Folder Name field.

  3. Select the location of the new folder.

  4. Click Create.

15.3.2.2 Folder Management

Right-click a folder or subfolder to display a menu with the following options:

Mark All Messages As Read:  Marks all the messages in the folder as read.

New Folder:  Creates a new folder or subfolder in the same location.

Copy Folder To:  Copies the folder to a different location. When you select this item, Evolution offers a choice of locations to copy the folder to.

Move Folder To:  Moves the folder to another location.

Delete:  Deletes the folder and all its contents.

Rename:  Lets you change the name of the folder.

Refresh:  Refreshes the folder.

Properties:  Shows the number of total and unread messages in a folder.

You can also rearrange folders and messages by dragging and dropping them.

Any time new e-mail arrives in an e-mail folder, that folder label is displayed in bold text, along with the number of new messages in that folder.

15.3.3 Using E-Mail

The e-mail component of Evolution has the following standout features:

  • It supports multiple e-mail sources from many protocols.

  • It lets you guard your privacy with encryption.

  • It can speedily handle large amounts of e-mail.

  • Search folders allow you to come back to often-used searches.

Below is a summary of the user interface elements of the e-mail window.

Message List

The message list displays all the e-mails that you have. This includes all your read and unread messages and e-mail that is flagged for deletion. With the Show drop-down box above the message list, you can filter the view using predefined and custom labels.

Preview Pane

This is where your e-mail is displayed.

If you find the preview pane too small, you can resize the pane, enlarge the whole window, or double-click the message in the message list to have it open in a new window. To change the size of a pane, drag the divider between the two panes.

As with folders, you can right-click messages in the message list and get a menu of possible actions. This includes moving or deleting them, creating filters or search folders based on them, and marking them as junk mail.

Actions related to e-mail, like Reply and Forward, appear as buttons in the toolbar and are also located in the right-click menu.

Templates

Evolution allows you to create and edit message templates that you can use at any time to send mail with the same pattern.

15.3.4 Calendaring

To begin using the calendar, click Calendars in the shortcut bar. By default, the calendar shows today’s schedule on a ruled background. At the upper right, there is a Tasks list, where you can keep a list of tasks separate from your calendar appointments. Below that, there is a list for memos.

Appointment List

The appointment list displays all your scheduled appointments.

Month Pane

The month pane is a small view of a calendar month. You can also select a range of days in the month pane to display a custom range of days in the appointment list.

Tasks

Tasks are distinct from appointments because they generally do not have times associated with them. You can see a larger view of your task list by clicking Tasks in the shortcut bar.

Memos

Memos, like Tasks, do not have times associated with them. You can see a larger view of your Memo list by clicking Memos in the shortcut bar.

15.3.5 Managing Contacts

To use the contacts component, click Contacts in the shortcut bar. The Evolution contacts component can handle all of the functions of an address book or phone book.

However, it also does more than a paper book can. To share your address book on a network, you can use LDAP directories. To create a new contact entry, right-click an e-mail address or double-click an empty space in the right pane. You can also search contacts using the search bar.

By default, the display shows all your contacts in alphabetical order, in a card-based view. You can select other views from the View menu.

15.4 For More Information

Get more information about Evolution from the application help available via F1.

Find more information on the project home page, https://wiki.gnome.org/Apps/Evolution.

16 Empathy: Instant Messaging


Empathy is an instant messaging (IM) client that allows you to connect to multiple accounts simultaneously. Chat live with your contacts in one tabbed interface, regardless of which IM system they use. Empathy uses Telepathy for protocol support.

Empathy supports the following instant messaging protocols: Google Talk (Jabber/XMPP), MSN, IRC, Salut, AIM, Facebook, Yahoo!, Gadu Gadu, Groupwise®, ICQ and QQ. (The supported protocols depend on installed Telepathy Connection Manager components.)
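
Which protocols are available on your system therefore depends on the installed Telepathy connection manager packages. The following is a minimal sketch of how to check and extend this from a shell; the package name telepathy-haze is only an example and may not exist in your repositories:

  # List Telepathy-related packages known to the package manager
  zypper search telepathy
  # Install an additional connection manager (example package name)
  sudo zypper install telepathy-haze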

In the following, learn how to set up Empathy and how to communicate with your contacts.

16.1 Starting Empathy

To start Empathy, select Applications › Internet › Empathy.

16.2 Configuring Accounts

To use Empathy, you must already have an account for the messaging service you want to use. For example, to use Empathy to chat via AIM, you must first have an AIM account.

Procedure 16.1: Adding and Editing Accounts in Empathy
  1. To start Empathy, select Applications › Internet › Empathy.

    If you start Empathy for the first time, a message appears, prompting you to configure an account.

  2. Enter your account data. The Messaging and VoIP Accounts dialog shows the accounts that have been configured so far.

  3. To add another account:

    1. In the Messaging and VoIP Accounts dialog, click the plus icon.

    2. Choose the type of account you want to configure, enter your user ID and password for the account and click Add. The dialog to add or modify accounts differs for each type of account, depending on what setup options are available for that account.

  4. To enter or modify connection data for an account:

    1. Select the account and click Edit Connection Parameters › Advanced.

    2. Enter a server name and a port to use for the connection. Specify additional parameters, such as encryption options, if necessary. If you are unsure which parameters to use, refer to your system administrator or messaging service.

    3. Click Apply to confirm your changes.

To go online with your account, turn the account switch on. When prompted for your password, enter it.

To disable the account, turn the switch off. If you are finished with the configuration of your accounts, close the Messaging and VoIP Accounts dialog.

16.3 Managing Contacts

Use the Contact List to manage your contacts. You can add and remove contacts and organize them in groups, so they are easy to find.

Procedure 16.2: Adding Contacts
  1. To add a contact, click Contacts › Add Contacts.

  2. Select the Account for which you want to add a contact.

  3. As Identifier, enter the name or user ID of the person you want to add.

  4. By default, Alias will show the same entry, but you can enter a different name or nickname for the contact person here.

    As soon as you start typing into the Identifier text box, the dialog will also show any groups that you have already defined.

  5. To add the new contact to a group, activate the respective group's check box.

  6. To create a new group, type a group name into the text box next to Add Group and click Add Group.

  7. Click Add to confirm your changes and to close the dialog.

If the groups or the newly added contacts are not displayed in the Contact List, check the Empathy preferences by clicking Preferences › General. Activate Show offline contacts and Show groups to make all contacts and groups appear in the Contact List.

To remove a contact from the list, right-click the name of that contact, select Remove and confirm your choice.

16.4 Chatting with Friends

To chat with other participants, you need to be connected to the Internet. After a successful login, you are usually marked as Available in the Contact List, and thus visible to others. To change your status, click the drop-down box at the top of the Contact List and select another option.

To open a chat session, double-click a contact name in the Contact List. The chat screen opens. Type your message, then press Enter to send.

If you open more than one chat session, the new session appears as a tab in the existing chat window. To see all messages of a session and to be able to write a reply, click the tab of that session. To see multiple sessions side by side, use the mouse to drag a tab out of the window. A second window will open.

To close a chat session, close the tab or window for it.

16.5 For More Information

This chapter explained the Empathy options you need to know about to set up Empathy and communicate with your contacts. It does not explain all features and options available. For more information, open Empathy, then click Help.

For updates about new features and for the latest information, refer to the home page of the project at https://wiki.gnome.org/Apps/Empathy.

17 Ekiga: Using Voice over IP


Ekiga is an application you can use for making phone calls via Voice over IP (VoIP), for video conferencing and for instant messaging.

Note
Note: Ekiga May Not Be Installed

Before proceeding, make sure that the package ekiga is installed.
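
If you are unsure whether the package is present, you can check and, if necessary, install it from a shell. This is a minimal sketch; the installation step requires root privileges:

  # Check whether the ekiga package is installed; install it if not
  rpm -q ekiga || sudo zypper install ekiga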

Before starting, make sure that the following requirements are met:

  • Your sound card is properly configured.

  • A headset or a microphone and speakers are connected to your computer.

  • For dialing in to regular phone networks, a SIP account is required. SIP (Session Initiation Protocol) is the protocol used to establish sessions for audio and video conferencing or call forwarding.

    There are many VoIP providers all over the world. One provider is the Ekiga project itself; go to https://ekiga.im to learn more.

  • For video conferencing: A Web cam is connected to your computer.

17.1 Starting Ekiga

Start Ekiga by clicking Applications › Internet › Ekiga Softphone.

17.2 Configuring Ekiga

On first start, Ekiga opens a configuration assistant that requests all data needed to configure Ekiga. Proceed as follows:

  1. Click Forward.

  2. Enter your full name (name and surname). Click Forward.

  3. Enter your ekiga.net account data or choose not to register with http://www.ekiga.net. Click Forward.

  4. Enter your Ekiga Call Out Account data or choose not to register with http://www.ekiga.net. Click Forward.

  5. Set your connection type and speed. Click Forward.

  6. Configure the audio devices to use by choosing the audio ringing, output and input device driver. In general, you can keep the Default setting. Click Forward.

  7. Choose a video input device, if available. Click Forward.

  8. Check the summary of your settings and apply them.

  9. If registration fails after making changes to your configuration, restart Ekiga.

Ekiga allows you to maintain multiple accounts. To configure an additional account, proceed as follows:

  1. Open Edit › Accounts.

  2. Choose Accounts › Add <account type>. If you are unsure, select Add a SIP Account.

  3. Enter the Registrar to which you have registered. This is usually an IP address or a host name that will be given to you by your Internet Telephony Service Provider. Enter User and Password according to the data provided by your provider.

  4. Make sure Enable account is activated and leave the configuration dialog with OK. The account is displayed in the Ekiga main window, including its Status, which should change to Registered.

17.3 The Ekiga User Interface

The user interface has different modes. To switch between views, use the toolbar. The first mode is Contacts, the second is Dialpad and the last one is Call History. Click the camera icon to open the Call Window. It displays images from your local Web cam (or from a remote Web cam during a call).

Ekiga User Interface
Figure 17.1: Ekiga User Interface

By default, Ekiga opens in the Contacts mode. This view shows you a local address book which lets you quickly open connections to often-used numbers.

Many of the functions of Ekiga are available with key combinations. Table 17.1, “Key Combinations for Ekiga” summarizes the most important ones.

Table 17.1: Key Combinations for Ekiga

Key Combination      Description

Ctrl+O               Initiate a call with the current number.
Esc                  Hang up.
Ctrl+N               Add a contact to your address book.
Ctrl+B               Open the Address Book dialog.
H                    Hold the current call.
T                    Transfer the current call to another party.
M                    Suspend the audio stream of the current call.
P                    Suspend the video stream of the current call.
Ctrl+W               Close the Ekiga user interface.
Ctrl+Q               Quit Ekiga.
Ctrl+E               Start the account manager.
Ctrl+J               Activate the Call Panel on the main user interface.
Ctrl++               Zoom in to the picture from the Web cam.
Ctrl+-               Zoom out of the picture from the Web cam.
Ctrl+0               Return the Web cam display to its normal size.
F11                  Use full screen for the Web cam.

17.4 Making a Call

After Ekiga is properly configured, making a call is easy.

  1. Switch to the Dialpad mode.

  2. Enter the SIP address of the party to call at the bottom of the window. The address should look like one of the following:

    • for direct calls to a specific host: sip:username@hostname

    • for calls via your SIP provider: sip:username@domainname or userid@sipserver

  3. Click Call or press Ctrl+O and wait for the other party to pick up the phone.

  4. To end the call, click Hang up or press Esc.

If you need to tweak the sound parameters, click Edit › Preferences.

17.5 Answering a Call

Ekiga can receive calls in two different ways: it can be called directly with sip:user@host, or via a SIP provider. Most SIP providers enable you to receive calls from a normal land-line to your VoIP account. Depending on the mode in which you use Ekiga, there are several ways in which you are alerted to an incoming call:

Normal Application

Incoming calls can only be received and answered if Ekiga is already started. You can hear the ring sound on your headset or your speakers. If Ekiga is not started, the call cannot be received.

Panel Applet

Normally, the Ekiga panel applet runs silently without giving any notice of its existence. This changes when a call comes in. The main window of Ekiga opens and you hear a ringing sound on your headset or speakers.

Once you have noticed an incoming call, click Accept to answer it and start talking. If you do not want to accept the call, click Reject. It is also possible to transfer the call to another SIP address.

17.6 Using the Address Book

Ekiga can manage your SIP contacts. All of the contacts are displayed in the Contacts tab, shown in the main window after start-up. To add a contact or a new contact group, select Chat › Add Contact.

If you want to add a new group, enter the group name into the bottom text box and click Add. The new group is then added to the group list and preselected.

The following entries are required for a valid contact:

Name

Enter the name of your contact. This may be a full name, but you can also use a nickname here.

Address

Enter a valid SIP address for your contact.

Groups

If you have many contacts, add your own groups.

To call a contact from the address book, double-click the contact. The call is initiated immediately.

17.7 For More Information

The official home page of Ekiga is http://www.ekiga.org/. This site offers answers to frequently asked questions and more detailed documentation.

For information about support for the H.323 teleconferencing protocol in Linux, see http://www.voip-info.org/wiki/view/H.323. This is also a good starting point when searching for projects supporting VoIP.

To set up a private telephone network, you might be interested in the PBX software Asterisk (http://www.asterisk.org/). Find information about it at http://www.voip-info.org/wiki-Asterisk.

Part V Graphics and Multimedia

18 GIMP: Manipulating Graphics

19 GNOME Videos

20 Brasero: Burning CDs and DVDs

18 GIMP: Manipulating Graphics


GIMP (the GNU Image Manipulation Program) is a program for creating and editing raster graphics. In most aspects, its features are comparable to those of Adobe* Photoshop* and other commercial programs. Use it to resize and retouch photographs, design graphics for Web pages, create covers for your custom CDs, or almost any other graphics project. It meets the needs of both amateurs and professionals.

GIMP is an extremely complex program. Only a small range of features, tools, and menu items are discussed in this chapter. See Section 18.8, “For More Information” for ideas of where to find more information about the program.

18.1 Graphics Formats

There are two main types of digital graphics: raster and vector. GIMP is intended for working with raster graphics, which are most often used for digital photographs or scanned images.

Raster Images.  A raster image is a collection of pixels: small blocks of color that create an entire image when put together. High-resolution images contain a large number of pixels, so such image files can easily become quite large. It is not possible to enlarge a raster image without losing quality.

GIMP supports most common formats of raster graphics, like JPEG, PNG, GIF, BMP, TIFF, PSD, and more.

Vector Images.  Unlike raster images, vector images do not store information about individual pixels. Instead, they use geometric primitives such as points, lines, curves, and polygons. Vector images can be scaled very easily. Depending on their content, vector image files can be very small or very large. However, their file size is usually independent of their display size.

The disadvantage of vector images is that they are not good at representing complex images with many colors such as photographs. There are many specialized applications for vector graphics, for example Inkscape. GIMP has very limited support for vector graphics. For example, GIMP can open and rasterize vector graphics in SVG format or work with vector paths.

GIMP supports only the most common color spaces:

  • RGB images with 8 bits per channel. This equals 24 bits per pixel in RGB images without an alpha channel (transparency). With an alpha channel, that equals 32 bits per pixel.

  • Grayscale images with 8 bits per pixel.

  • Indexed images with up to 256 colors.

Many high-end digital cameras produce image files with color depths above 8 bits per channel. If you import such an image into GIMP, you will lose some color information. GIMP also does not support a CMYK color mode for professional printing.

18.2 Starting GIMP

To start GIMP, select Applications › Graphics › GIMP.

18.3 User Interface Overview

By default, GIMP shows three windows: the toolbox, an empty image window with the menu bar, and a window containing several docked dialogs. The windows can be arranged on the screen as you need them and closed when no longer needed. Closing the image window when it is empty quits the application.

In the default configuration, GIMP saves your window layout when you quit. Dialogs left open reappear when you next start the program.

If you want to combine all windows of GIMP, activate Windows › Single-Window Mode.

18.3.1 The Image Window

If there is currently no image open, the image window is empty, containing only the menu bar and the drop area, which can be used to open any file by dragging and dropping it there. Every new, opened, or scanned image appears in its own window. If there is more than one open image, each image has its own image window. There is always at least one image window open.

In Single-Window Mode, all image windows are accessible from a tab bar at the top of the window.

The menu bar at the top of the window provides access to all image functions. You can also access the menu by right-clicking the image or clicking the small arrow button in the top left corner of the rulers.

The File menu offers the standard file operations, such as New, Open, Save, Print and Close. Quit quits the application.

With the items in the View menu, control the display of the image and the image window. New View opens a second display window of the current image. Changes made in one view are reflected in all other views of that image. Alternate views are useful for magnifying a part of an image for manipulation while seeing the complete image in another view. Adjust the magnification level of the current window with Zoom. When Fit Image in Window is selected, the image window is resized to fit the current image display exactly.

18.3.2 The Toolbox

The toolbox contains drawing tools, a color selector, and a freely configurable space for options pages. If you accidentally close the toolbox, you can reopen it by clicking Tools › New Toolbox.

To find out what a particular tool does, hover over its icon. At the very top, there is a drop area which can be used to open any image file by simply dragging and dropping it there.

The Toolbox
Figure 18.1: The Toolbox

The current foreground and background color are shown in two overlapping boxes. The default colors are black for the foreground and white for the background. Swap the foreground and background color with the bent arrow icon to the upper right of the boxes. Use the black and white icon to the lower left to reset the colors to the default. Click the box to open a color selection dialog.

Under the toolbox, a dialog shows options for the currently selected tool. If it is not visible, open it by double-clicking the icon of the tool in the toolbox.

18.3.3 Layers, Channels, Paths, Undo

Layers shows the different layers in the current image and can be used to manipulate the layers. Information is available in Section 18.6.6, “Layers”.

Channels shows the color channels of the current image and can be used to manipulate them.

Paths are a vector-based method of selecting parts of an image. They can also be used for drawing. Paths shows the paths available for an image and provides access to path functions. Undo shows a limited history of modifications made to the current image. Its use is described in Section 18.6.5, “Undoing Mistakes”.

18.4 Getting Started

Although GIMP can be a bit overwhelming for new users, most quickly find it easy to use after they work out a few basics. Crucial basic functions are creating, opening, and saving images.

18.4.1 Creating a New Image

  1. To create a new image, select File › New. This opens a dialog in which you can make settings for the new image.

  2. If desired, select a predefined setting called a Template.

    Note
    Note: Custom Templates

    To create a custom template, select Windows › Dockable Dialogs › Templates and use the controls offered by the window that opens.

  3. In the Image Size section, set the size of the image to create in pixels or another unit. Click the name of the unit to select another unit from the list of available units.

  4. (Optional) To set a different resolution, click Advanced Options, then change the value for Resolution.

    The default resolution of GIMP is usually 72 pixels per inch. This corresponds to a common screen display and is sufficient for most Web page graphics. For print images, use a higher resolution, such as 300 pixels per inch. At 300 ppi, for example, a 4×6 inch print requires an image of 1200×1800 pixels.

    In Color space, select whether the image should be in color (RGB) or Grayscale. For detailed information about image types, see Section 18.6.7, “Image Modes”.

    In Fill With, select the color the image is filled with. You can choose between the Foreground Color or Background Color set in the toolbox, White, or Transparency for a transparent image. Transparency is represented by a gray checkerboard pattern.

  5. When the settings meet your needs, click OK.

18.4.2 Opening an Existing Image

To open an existing image, select File › Open.

In the dialog that opens, select the desired file and click Open.

18.5 Saving and Exporting Images

GIMP makes a distinction between saving and exporting images.

Saving an Image.  The image is stored with all its properties in a lossless format. This includes, for example, layer and path information. This means that repeatedly opening and saving the image degrades neither its quality nor how well it can be edited.

To save an image, use File › Save or File › Save as. To be able to store all properties, only the native format of GIMP is allowed in this mode: the XCF format.

Exporting an Image.  The image is stored in a format in which some properties can be lost. For example, most image formats do not support layers. When exporting, GIMP will often tell you which properties will be lost and ask you to decide how to proceed.

To export an image, use File › Overwrite or File › Export As. Below is a selection of the most common file formats that GIMP can export to:

JPEG

A common format for photographs and Web page graphics without transparency. Its compression method enables reduction of file sizes, but information is lost when compressing. It may be a good idea to use the preview option when adjusting the compression level. Levels of 85% to 75% often result in an acceptable image quality with reasonable compression. Repeatedly opening a JPEG and then saving can quickly result in poor image quality.

GIF

Although very popular in the past for graphics with transparency, GIF is less often used now. GIF is also used for animated images. The format can only save indexed images. See Section 18.6.7, “Image Modes” for information about indexed images. The file size can often be quite small if only a few colors are used.

PNG

With its support for transparency, lossless compression, and good browser support, PNG is the preferred format for Web graphics with transparency. An added advantage is that PNG offers partial transparency, which is not offered by GIF. This enables smoother transitions from colored areas to transparent areas (antialiasing). It also supports the full RGB color space which makes it usable for photos. However, it cannot be used for animations.

18.6 Editing Images

GIMP provides several tools for making changes to images. The functions described here are those most interesting for smaller edits.

18.6.1 Changing the Size of an Image

After an image is scanned or a digital photograph is loaded from the camera, it is often necessary to modify the size for display on a Web page or for printing. Images can easily be made smaller either by scaling them down or by cutting off parts of them.

Enlarging an image is much more problematic. Because of the nature of raster graphics, quality is lost when an image is enlarged. It is recommended to keep a copy of your original image before scaling or cropping.

18.6.1.1 Cropping an Image

  1. Select the crop tool from the toolbox (the paper knife icon) or click Tools › Transform Tools › Crop.

  2. Click a starting corner and drag to outline the area to keep. A rectangle showing the crop area will appear.

  3. To adjust the size of the rectangle, move your mouse pointer above any of the rectangle's sides or corners, then click and drag to resize as desired. If you want to adjust both width and height of the rectangle, use a corner. To adjust only one dimension, use a side. To move the whole rectangle to a different position without resizing, click anywhere near its center and drag to the desired position.

  4. When you are satisfied with the crop area, click anywhere inside to crop the image or press Enter. To cancel the cropping, click anywhere outside the crop area.

18.6.1.2 Scaling an Image

  1. Select Image › Scale Image to change the overall size of an image.

  2. Select the new size by entering it in Width or Height.

    To change the proportions of the image when scaling (this distorts the image), click the chain icon to the right of the fields to break the link between them. When those fields are linked, all values are changed proportionately. Adjust the resolution with X resolution and Y resolution.

    The Interpolation option controls the quality of the resulting image. The default Cubic interpolation method usually is a good standard to use.

  3. When you are finished, click Scale.

18.6.1.3 Changing the Canvas Size

The canvas is the entire visible area of an image. Canvas and image are independent from each other. If the canvas is smaller than the image, you can only see part of the image. If the canvas is larger, you see the original image with extra space around it.

  1. Select Image › Canvas Size.

  2. In the dialog that opens, enter the new size. To make sure the dimensions of the image stay the same, click the chain icon.

  3. After adjusting the size, determine how the existing image should be positioned in comparison to the new size. Use the Offset values or drag the box inside the frame at the bottom.

  4. When you are finished, click Resize.

18.6.2 Selecting Parts of Images

It is often useful to perform an image operation on only part of an image. To do this, the part of the image with which you want to work must be selected. Areas can be selected using the selection tools available in the toolbox, using the quick mask, or combining different options. Selections can also be modified with the items under Select. The selection is outlined with a dashed line, called marching ants.

18.6.2.1 Using the Selection Tools

The main selection tools are easy to use. The more complicated paths tool is not described here.

To determine whether a new selection should replace, be added to, be subtracted from, or intersect with an existing selection, use the Mode row in the tool options.

Rectangle Select

This tool can be used to select rectangular or square areas. To select an area with a fixed aspect ratio, width, height or size, activate the Fixed option and choose the relevant mode in the Tool Options dialog. To create a square, hold Shift while selecting a region.

Ellipse Select

Use this to select elliptical or circular areas. The same options are available as with the rectangular selection. To create a circle, hold Shift while selecting a region.

Free Select (Lasso)

With this tool, you can create a selection based on a combination of freehand drawing and polygonal segments. To draw a freehand line, drag the mouse over the image with the left mouse button pressed. To create a polygonal segment, release the mouse button where the segment should start and press it again where the segment should end. To complete the selection, hover the pointer above the starting point and click inside the circle.

Fuzzy Select (Magic Wand)

This tool selects a continuous region based on color similarities. Set the maximum difference between colors in the tool options dialog in Threshold. By default, the selection is based only on the active layer. To base the selection on all visible layers, check Sample merged.

Select by Color

With this tool, select all the pixels in the image with the same or a similar color as the clicked pixel. The maximum difference between colors can be set in the tool options dialog in Threshold. The important difference between this tool and Fuzzy Select is that Fuzzy Select works on continuous color areas while Select by Color selects all pixels with similar colors in the whole image regardless of their position.

Scissors

Click a series of points in the image. As you click, the points are connected based on color differences. Click the first point to close the area. Convert it to a regular selection by clicking inside it.

Foreground Selection

The Foreground Selection tool lets you semi-automatically select an object in a photograph with minimal manual effort.

To use the Foreground Selection tool, follow these steps:

  1. Activate the Foreground Selection tool by clicking its icon in the Toolbox or choosing Tools › Selection Tools › Foreground Select from the menu.

  2. Roughly select the foreground object you want to extract. Select as little as possible from the background but include the whole object. At this point, the tool works like the Fuzzy Select tool.

    When you release the mouse button, the deselected part of the image is covered with a dark blue mask.

  3. Draw a continuous line through the foreground object going over colors which will be kept for the extraction. Do not paint over background pixels.

    When you release the mouse button, the entire background is covered with a dark blue mask. If parts of the object are also masked, paint over them. The mask will adapt.

  4. When you are satisfied with the mask, press Enter. The mask will be converted to a new selection.

18.6.2.2 Using the Quick Mask

The quick mask is a way of selecting parts of an image using the paint tools. A good way to use it is to first create a rough selection using the Scissors or Free Select tool. Then start using the Quick Mask:

  1. To activate the Quick Mask, in the lower left corner of the image window, click the icon with the dashed box. The Quick Mask icon now changes to a red box.

    The Quick Mask highlights the deselected parts of the image with a red overlay. Areas appearing in their normal color are selected.

    Note
    Note: Changing the Color of the Mask

    To use a different color for displaying the quick mask, right-click the quick mask button then select Configure Color and Opacity from the menu. Click the colored box in the dialog that opens to select a new color.

  2. To modify the selection, use the paint tools.

    Painting with white selects the painted pixels. Painting with black deselects pixels. Shades of gray (colors are treated as shades of gray) create a partial selection. Partial selections allow a smooth transition between selected and deselected areas.

  3. When you are finished, return to the normal selection view by clicking the icon in the lower left corner of the image window. The selection is then displayed with the marching ants.

18.6.3 Applying and Removing Color

Most image editing involves applying or removing color. By selecting a part of the image, you can limit where color can be applied or removed. When you select a tool and move the mouse pointer onto an image, the appearance of the mouse pointer changes to reflect the chosen tool.

With many tools, an icon of the current tool is shown along with the arrow. For paint tools, an outline of the current brush is shown, allowing you to see exactly where you will be painting in the image and how large of an area will be painted.

18.6.3.1 Selecting Colors

The GIMP toolbox always shows two color swatches. The foreground color is used by the paint tools. The background color is used much more rarely, but it can easily be switched to become the foreground color.

  1. To change the color displayed in a swatch, click the swatch. A dialog with five tabs opens.

  2. These tabs provide different color selection methods. Only the first tab, shown in Figure 18.2, “The Basic Color Selector Dialog”, is described here. The new color is shown in Current. The previous color is shown in Old.

    The Basic Color Selector Dialog
    Figure 18.2: The Basic Color Selector Dialog

    The easiest way to select a color is by using the colored areas in the boxes to the left. In the narrow vertical bar, click a color similar to the desired color. The larger box to the left then shows available nuances. Click the desired color. It is then shown in Current.

    The arrow button to the right of Current allows saving colors. Click the arrow to copy the current color to the history. A color can then be selected by clicking it in the history.

    A color can also be selected by directly entering its hexadecimal color code in HTML Notation, for example ff0000 for pure red.

    The color selector defaults to selecting a color by hue. To select by saturation, value, red, green, or blue, select the corresponding radio button to the right. The sliders and number fields can also be used to modify the currently selected color. Experiment a bit to find out what works best for you.

  3. When you are finished, click OK.

To select a color that already exists in your image, use the eye dropper tool. With the tool options, set whether the foreground or background color should be selected.

18.6.3.2 Painting and Erasing

To paint and erase, use the tools from the toolbox. There are a number of options available to fine-tune each tool. Pressure sensitivity options apply only when a pressure-sensitive graphics tablet is used.

The pencil, brush, airbrush, and eraser work much like their real-life equivalents. The ink tool works like a calligraphy pen. Paint by clicking and dragging. The bucket fill is a method of coloring areas of an image. It fills based on color boundaries in the image. Adjusting the threshold modifies its sensitivity to color changes.

18.6.3.3 Adding Text

To add text, use the text tool. Use the tool options to select the desired font and text properties. Click into the image, then start writing.

The text tool creates text in a special layer. To work with the image after adding text, read Section 18.6.6, “Layers”. When the text layer is active, it is possible to modify the text by clicking in the image to reopen the entry dialog.

18.6.3.4 Retouching Images—The Clone Tool

The clone tool is ideal for retouching images. It enables you to paint in an image using information from another part of the image. If desired, it can instead take information from a pattern.

When retouching, use a small brush with soft edges. In this way, the modifications can blend better with the original image.

To select the source point in the image, press and hold Ctrl while clicking the desired source point. Then paint with the tool. When you move the cursor while painting, the source point, marked by a cross, moves as well.

If the Alignment is set to None (the default setting), the source resets to the original when you release the left mouse button.

18.6.4 Adjusting Color Levels

Images often need a little adjusting to get ideal print or display results.

  1. Select Colors › Levels. A dialog opens for controlling the levels in the image.

  2. Good results can usually be obtained by clicking Auto. To make manual adjustments to all channels, use the dropper tools in All Channels to pick areas in the image that should be black, neutral gray, and white.

    To modify an individual channel, select the desired channel in Channel. Then drag the black, white, and middle markers in the slider in Input Levels. You can also use the dropper tools to select points in the image that should serve as the white, black, and gray points for that channel.

    If Preview is checked, the image window shows a preview of the image with the modifications applied.

  3. When you are finished, click OK.

18.6.5 Undoing Mistakes

Most modifications made in GIMP can be undone. To view a history of modifications, use the undo dialog included in the default window layout or open one from the image window menu with Windows › Dockable Dialogs › Undo History.

The dialog shows a base image and a series of editing changes that can be undone. Use the buttons to undo and redo changes. In this way, you can often work back to the base image.

You can also undo and redo changes using Undo and Redo from the Edit menu. Alternatively, use the shortcuts Ctrl+Z and Ctrl+Y.

18.6.6 Layers

Layers are a very important aspect of GIMP. By drawing parts of your image on separate layers, you can change, move, or delete those parts without damaging the rest of the image.

To understand how layers work, imagine an image created from a stack of transparent sheets. Different parts of the image are drawn on different sheets. The stack can be arranged and sorted. Individual layers or groups of layers can shift position, moving sections of the image to other locations. New sheets can be added and others can be removed or made invisible.

Use the Layers dialog to view the available layers of an image. The text tool automatically creates special text layers when used. The active layer is selected. The buttons at the bottom of the dialog offer several functions. More are available in the menu opened when a layer is right-clicked in the dialog. The two icon spaces before the image name are used for toggling image visibility (eye icon when visible) and for linking layers. Linked layers are marked with the chain icon and moved as a group.

18.6.7 Image Modes

GIMP has three image modes:

  • RGB is a normal color mode and is the best mode for editing most images.

  • Grayscale is used for black-and-white images.

  • Indexed mode limits the colors in the image to a set number. The maximum number of colors in this mode is 256. It is mainly used for GIF images.

If you need an indexed image, it is normally best to edit the image in RGB, then convert to indexed right before exporting. If you export to a format that requires an indexed image, GIMP offers to index the image when exporting.

18.6.8 Special Effects

GIMP includes a wide range of filters and scripts for enhancing images, adding special effects to them or making artistic manipulations. They are available in Filters. Experimenting is the best way to find out what is available.

18.7 Printing Images

To print an image, select File › Print from the image menu. If your printer is configured in the system, it should appear in the list. You can configure printing options on the Page Setup and Image Settings tabs.

The Print Dialog
Figure 18.3: The Print Dialog

When you are satisfied with the settings, click Print. Cancel aborts printing.

18.8 For More Information

The following resources are very useful for users of GIMP. They contain much more information about GIMP than this chapter. If you want to use GIMP for more advanced tasks, you should not miss these resources.

  • http://www.gimp.org is the official home page of GIMP. News about GIMP and related software is regularly posted on the front page.

  • Help provides access to the internal help system including the extensive GIMP User Manual. The package gimp-help needs to be installed. This documentation is also available online in HTML and PDF formats at http://docs.gimp.org. Translations into many languages are available.

  • A collection of many interesting GIMP tutorials is maintained at http://www.gimp.org/tutorials/. It contains basic tutorials for beginners and tutorials for advanced or expert users.

  • Printed books about GIMP are published regularly. You will find a selection of the best ones with short annotations at http://www.gimp.org/books/.

  • GIMP functionality can be extended with scripts and plug-ins. Many such scripts and plug-ins are distributed in the GIMP package, but others can be downloaded from the Internet. At http://registry.gimp.org/, you will find a database of GIMP scripts and plug-ins. A small scripting example is shown below.
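
As a small taste of GIMP scripting, the following shell command runs a Script-Fu snippet in batch mode to scale an image without opening the user interface. This is a minimal sketch: in.png and out.png are placeholder file names, and the target size of 800×600 pixels is arbitrary:

  gimp -i -b '(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "in.png" "in.png"))))
                (gimp-image-scale image 800 600)
                (gimp-image-flatten image)
                (file-png-save RUN-NONINTERACTIVE image
                               (car (gimp-image-get-active-drawable image))
                               "out.png" "out.png" 0 9 1 1 1 1 1))' \
       -b '(gimp-quit 0)'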

You can also use mailing lists or IRC channels to ask questions about GIMP. Always try to find answers in the documentation mentioned above or in mailing list archives before asking your question. The time of experienced users present on GIMP lists and channels is limited. Be polite and patient. It may take some time before your question is answered.

  • There are several mailing lists about GIMP. You will find them at http://www.gimp.org/mail_lists.html. The GIMP User list is the most appropriate place to ask user questions.

  • There is a whole IRC network dedicated to GIMP and the GNOME desktop environment: GIMPNet. You can connect to GIMPNet with your favorite IRC client by pointing it at the irc.gimp.org server. The #gimp-users channel is the right place to ask questions about using GIMP. If you want to listen to developers' discussions, join the #gimp channel.

19 GNOME Videos


GNOME Videos is the default movie player. GNOME Videos provides the following multimedia features:

  • Support for a variety of video and audio files

  • A variety of zoom levels and aspect ratios, and a full screen view

  • Seek and volume controls

  • Playlists

  • Complete keyboard navigation

To start GNOME Videos, click Applications › Sound & Video › Videos.

19.1 Using GNOME Videos

When you start GNOME Videos, the following window is displayed.

GNOME Videos Start-Up Window
Figure 19.1: GNOME Videos Start-Up Window

19.1.1 Opening a Video or Audio File

  1. Click Videos › Open.

  2. Select the files you want to open, then click Add.

You can also drag a file from another application (such as a file manager) to the GNOME Videos window. GNOME Videos opens the file and plays the movie or song. GNOME Videos displays the title of the movie or song beneath the display area and in the titlebar of the window.

Note
Note: Unrecognized File Format

If you try to open a file format that GNOME Videos does not recognize, the application displays an error message and recommends a suitable codec.
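
GNOME Videos plays media through the GStreamer framework, so whether a format is recognized depends on the installed GStreamer plug-ins. The following is a minimal sketch of how to check from a shell; h264 is only an example search term, and the avdec_h264 element is provided by an optional plug-in package that may not be installed:

  # Search the installed GStreamer plug-ins for a codec name
  gst-inspect-1.0 | grep -i h264
  # Show details about a specific decoder element, if present
  gst-inspect-1.0 avdec_h264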

You can double-click a video or audio file in GNOME Files to open it in the GNOME Videos window by default.

19.1.2 Opening a Video or Audio File By URI Location

  1. Click Videos › Open Location.

  2. Specify the URI location of the file you want to open (for example, a URL such as http://example.com/movie.ogv), then click Open.

19.1.3 Playing a DVD, VCD, or CD

To play a DVD, VCD, or CD, insert the disc in the optical device of your computer, then click Movie › Play Disc.

To eject a DVD, VCD, or CD, click Movie › Eject.

To pause a movie or song that is playing, click the GNOME Videos Pause button, or click Movie › Play/Pause. When you pause a movie or song, the statusbar displays Paused and the time elapsed on the current movie or song.

To resume playing a movie or song, click the GNOME Videos Play button, or click Movie › Play/Pause.

To play or pause a movie, you can also press P.

To view the properties of a movie or song, click View › Sidebar to make the sidebar appear. The dialog contains the title, artist, year, and duration of the movie or song, the video dimensions, codec, and frame rate, and the audio bit rate.

19.1.4 Seeking Through Movies or Songs

To seek through movies or songs, use any of the following methods:

To skip forward

Click Go › Skip Forward. Alternatively, use the corresponding keyboard shortcut.

To skip backward

Click Go › Skip Backward. Alternatively, use the corresponding keyboard shortcut.

To move to next movie or song

Click Go › Next Chapter/Movie, or click the GNOME Videos Next button.

To move to previous movie or song

Click Go › Previous Chapter/Movie, or click the GNOME Videos Previous button.

19.1.5 Changing the Zoom Factor

To change the zoom factor of the display area, use any of the following methods:

To zoom to full screen mode

Click View › Fullscreen. Alternatively, press F.

To exit fullscreen mode, click Leave Fullscreen or press Esc.

To zoom to half size (50%) of the original movie or visualization

Click View › Fit Window to Movie › Resize 1:2.

To zoom to the original size (100%) of the movie or visualization

Click View › Fit Window to Movie › Resize 1:1.

To zoom to double size (200%) of the original movie or visualization

Click View › Fit Window to Movie › Resize 2:1.

To switch between different aspect ratios, click View › Aspect Ratio.

The default aspect ratio is Auto.

19.1.6 Showing or Hiding Controls

To hide the window controls of GNOME Videos, click View › Show Controls and deselect the option. To show the controls on the GNOME Videos window, right-click the window, then select Show Controls. If the Show Controls option is selected, GNOME Videos shows the menubar, time elapsed slider, seek control buttons, volume slider, and statusbar on the window. If the Show Controls option is not selected, the application hides these controls and shows only the display area.

19.1.7 Managing Playlists

To show the playlist, click View › Sidebar. The Playlist sidebar is displayed.

You can use the Playlist dialog to do the following:

  • To add a track or movie:  Click the Add button. Select the file you want to add to the playlist, then click OK.

  • To remove a track or movie:  Select the file names from the file name list box, then click Remove.

  • To save a playlist to file:  Click the Save Playlist button, then specify a file name.

  • To move a track or movie up the playlist:  Select the file name from the file name list box, then click the Move Up button.

  • To move a track or movie down the playlist:  Select the file name from the file name list box, then click the Move Down button.

To hide the playlist, click View › Sidebar, or click the Sidebar button.

To enable or disable repeat mode, click Edit › Repeat Mode. To enable or disable shuffle mode, click Edit › Shuffle Mode.

19.1.8 Choosing Subtitles

To choose the language of the subtitles, click View › Subtitles › Select Text Subtitles, then select the subtitles language (DVD) or subtitle file (AVI etc.) you want to display.

To disable the display of subtitles, click View › Subtitles › None.

By default, GNOME Videos chooses the subtitle language to match the language you use on your computer.

GNOME Videos automatically loads and displays subtitles if the file that contains them has the same name as the video file and one of the following extensions: srt, asc, txt, sub, smi, or ssa. For example, subtitles for movie.avi are loaded automatically from movie.srt.

19.2 Modifying GNOME Videos Preferences

To modify GNOME Videos preferences, click Videos › Preferences.

19.2.1 General Preferences

The General Preferences let you select a network connection speed, specify if media files should be played from the last used position, and change the font and encoding used to display subtitles.

GNOME Videos General Preferences
Figure 19.2: GNOME Videos General Preferences

General Preferences include the following:

Playback

Lets you specify whether to start playing the movie from the last position.

Networking

Select network connection speed from the Connection speed drop-down box.

Text Subtitles

Lets you specify whether to load the subtitles automatically, and change the font and encoding used to display the subtitles.

19.2.2 Display Preferences

The Display Preferences let you choose to automatically resize the window when a new video is loaded, change the color balance, and configure visual effects when an audio file is played.

GNOME Videos Display Preferences
Figure 19.3: GNOME Videos Display Preferences

Display Preferences include the following:

Automatically resize the window when a new video is loaded

Select this option if you want GNOME Videos to automatically resize the window when a new video is loaded.

Disable the screen saver when playing video or audio

Select this option if you want GNOME Videos to automatically disable the desktop screen saver while playing video or audio.

Visual Effects

You can choose to show visual effects when an audio file is playing, and select the type and size of the visualization.

Color Balance

Specify the level of color brightness, contrast, saturation, and hue.

19.2.3 Audio Preferences

The Audio Preferences dialog lets you select the audio output type.

GNOME Videos Audio Preferences
Figure 19.4: GNOME Videos Audio Preferences

20 Brasero: Burning CDs and DVDs


Brasero is a GNOME program for writing data and audio CDs and DVDs. Start the program from the main menu by clicking Applications › Sound & Video › Brasero.

The following sections are a quick introduction on how to create your own CD or DVD.

20.1 Creating a Data CD or DVD

After starting Brasero for the first time, the main window appears as shown in Figure 20.1.

Main View of Brasero
Figure 20.1: Main View of Brasero

To create a data CD or DVD, proceed as follows:

  1. Click Data project or select Project › New Project › New Data Project. The project view appears.

  2. Drag and drop the desired directories or individual files either from your file manager or by clicking the plus icon. To show your directory structure directly in Brasero, select View › Show Side Panel or press F7.

  3. Optionally, save the project under a name of your choice with Project › Save As.

  4. Name your medium. The default label is Data disc (date).

  5. Choose the output medium from the drop-down menu next to the Burn button (CD/DVD or an ISO image file).

  6. Click Burn. A new dialog appears, depending on what medium you have selected in the previous step:

    • CD/DVD.  You can define some parameters, like the burning speed or where to store temporary files. Under Options, you can also choose whether to burn the image directly, close the session, verify the written data, and more.

    • ISO Image.  Specify a file name for your ISO image file.

  7. Start the process with Burn.

20.2 Creating an Audio CD

There are no significant differences between creating an audio CD and creating a data CD. Proceed as follows:

  1. Select Project › New Project › New Audio Project.

  2. Drag and drop the individual audio tracks to the project directory. The audio data must be in WAV or Ogg Vorbis format. Determine the sequence of the tracks by moving them up or down in the project directory.

  3. Click Burn. A dialog opens.

  4. Specify a drive to write to.

  5. Click Properties to adjust burning speed and other preferences. When burning audio CDs, choose a lower burning speed to reduce the risk of burn errors.

  6. Click Burn.

20.3 Copying a CD or DVD

To copy a CD or DVD, proceed as follows:

  1. Click Disc Copy or go to Project › New Project › Copy Disc. The Copy CD/DVD dialog opens.

  2. Specify the source drive you want to copy.

  3. Specify a drive or image file to write to.

  4. If necessary, change the burning speed, the temporary directory and other options in Properties.

  5. Click Copy.

20.4 Writing ISO Images

If you already have an ISO image, click Burn image or go to Project › New Project › Burn Image. Choose the image to write and a disc to write to. If necessary, change parameters by clicking Properties. Choose the location of the image file with the pop-up menu labeled Path. Start the burning process by clicking Burn.
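
If you prefer the command line, you can also write an ISO image with a tool such as wodim. The following is a minimal sketch, assuming the wodim package is installed; the device name /dev/sr0 and the file name image.iso are placeholders—adjust them to your system.

    # List available burner devices to find the right dev= target
    tux > wodim --devices
    # Write the image; -v prints progress, -eject ejects the disc when done
    root # wodim -v dev=/dev/sr0 -eject image.iso

After burning, comparing a checksum of the disc with that of the image file gives you a verification similar to Brasero's verify option.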

20.5 Creating a Multisession CD or DVD

Multisession discs can be used to write data in more than one burning session. This is useful, for example, for writing backups that are smaller than the media. In each session, you can add another backup file. Note that you are not limited to data sessions: you can also add audio sessions to a multisession disc.

To start a new multisession disc, do the following:

  1. Start with a data disc first as described in Section 20.1, “Creating a Data CD or DVD”. You cannot start with an audio CD session. Make sure that you do not fill up the entire disc, because otherwise you cannot append a new session.

  2. Click Burn. The window Disc Burning Setup opens.

  3. Select Leave the disc open to add other files later to make the disc multisession capable. Configure other options if needed.

  4. Start the burning session with Burn.

20.6 For More Information

You can find more information about Brasero at https://wiki.gnome.org/Apps/Brasero.

A Help and Documentation

  • Filename: help_user.xml
  • ID: cha.userhelp
Abstract

SUSE® Linux Enterprise Desktop comes with various sources of information and documentation, many of which are already integrated in your installed system:

Desktop Help Center

The help center of the GNOME desktop (Help) provides central access to the most important documentation resources on your system, in searchable form. These resources include online help for installed applications, man pages, info pages, and the SUSE manuals delivered with your product. Learn more in Section A.1, “Using GNOME Help”.

Separate Help Packages for Some Applications

When installing new software with YaST, the software documentation is installed automatically, and usually appears in the help center of your desktop. However, some applications, such as GIMP, may have different online help packages that can be installed separately with YaST and do not integrate into the help centers.

Documentation in /usr/share/doc

This traditional help directory holds various documentation files and the release notes for your system. Find more detailed information in Section 32.1, “Documentation Directory”.

Man Pages and Info Pages for Shell Commands

When working with the shell, you do not need to know the options of the commands by heart. Traditionally, the shell provides integrated help by means of man pages and info pages. Read more in Section 32.2, “Man Pages” and Section 32.3, “Info Pages”.

A.1 Using GNOME Help

On the GNOME desktop, to start Help directly from an application, either click the Help button or press F1. Both options take you directly to the application's documentation in the help center. However, you can also start Help by opening a terminal and entering yelp, or from the main menu by clicking Applications › Favorites › Help.
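
For example, assuming a running GNOME session, you can start the help center from a shell as follows; the man: URI form is a convenience supported by recent Yelp versions:

    # Start the GNOME help center in the background
    tux > yelp &
    # Open a specific document directly, for example the man page of chmod
    tux > yelp man:chmod &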

Main Window of Help
Figure A.1: Main Window of Help

To see an overview of available application manuals, click the menu icon and select All Help.

The menu and the toolbar provide options for navigating the help center, for searching and for printing contents from Help. The help topics are grouped into categories presented as links. Click one of the links to open a list of topics for that category. To search for an item, click the search icon and enter the search string into the search field at the top of the window.

A.2 Additional Help Resources

In addition to the SUSE manuals installed under /usr/share/doc, you can also access the product-specific manuals and documentation on the Web. For an overview of all documentation available for SUSE Linux Enterprise Desktop check out your product-specific documentation Web page at http://www.suse.com/documentation/.

If you are searching for additional product-related information, you can also try general-purpose search engines. For example, use the search terms Linux CD-RW help or LibreOffice file conversion problem if you are having trouble with CD burning or with LibreOffice file conversion.

A.3 For More Information

Apart from the product-specific help resources, there is a broad range of information available for Linux topics.

A.3.1 The Linux Documentation Project

The Linux Documentation Project (TLDP) is run by a team of volunteers who write Linux-related documentation (see http://www.tldp.org). The set of documents contains tutorials for beginners, but is mainly focused on experienced users and professional system administrators. TLDP publishes HOWTOs, FAQs, and guides (handbooks) under a free license. Parts of the documentation from TLDP are also available on SUSE Linux Enterprise Desktop.

A.3.1.1 Frequently Asked Questions

FAQs (frequently asked questions) are a series of questions and answers. They originate from Usenet newsgroups where the purpose was to reduce continuous reposting of the same basic questions.

A.3.1.2 Guides

Manuals and guides for various topics or programs can be found at http://www.tldp.org/guides.html. They range from Bash Guide for Beginners to Linux File System Hierarchy to Linux Administrator's Security Guide. Generally, guides are more detailed and exhaustive than HOWTOs or FAQs. They are usually written by experts for experts.

A.3.2 Wikipedia: The Free Online Encyclopedia

Wikipedia is a multilingual encyclopedia designed to be read and edited by anyone (see http://en.wikipedia.org). The content of Wikipedia is created by its users and is published under a dual free license (GFDL and CC-BY-SA). However, as Wikipedia can be edited by any visitor, it should be used only as a starting point or general guide. There is much incorrect or incomplete information in it.

A.3.3 Standards and Specifications

There are various sources that provide information about standards or specifications.

http://www.linux-foundation.org/en/LSB

The Linux Foundation is an independent nonprofit organization that promotes the distribution of free and open source software. The organization endeavors to achieve this by defining distribution-independent standards. The maintenance of several standards, such as the important LSB (Linux Standard Base), is supervised by this organization.

http://www.w3.org

The World Wide Web Consortium (W3C) is one of the best-known standards organizations. It was founded in October 1994 by Tim Berners-Lee and concentrates on standardizing Web technologies. W3C promotes the dissemination of open, license-free, and manufacturer-independent specifications, such as HTML, XHTML, and XML. These Web standards are developed in a four-stage process in working groups and are presented to the public as W3C recommendations (REC).

http://www.oasis-open.org

OASIS (Organization for the Advancement of Structured Information Standards) is an international consortium specializing in the development of standards for Web security, e-business, business transactions, logistics, and interoperability between various markets.

http://www.ietf.org

The Internet Engineering Task Force (IETF) is an internationally active cooperative of researchers, network designers, suppliers, and users. It concentrates on the development of Internet architecture and the smooth operation of the Internet by means of protocols.

Every IETF standard is published as an RFC (Request for Comments) and is available free-of-charge. There are six types of RFC: proposed standards, draft standards, Internet standards, experimental protocols, information documents, and historic standards. Only the first three (proposed, draft, and full) are IETF standards in the narrower sense (see http://www.ietf.org/rfc/rfc1796.txt).

http://www.ieee.org

The Institute of Electrical and Electronics Engineers (IEEE) is an organization that draws up standards in the areas of information technology, telecommunication, medicine and health care, transport, and others. IEEE standards are subject to a fee.

http://www.iso.org

ISO (the International Organization for Standardization) is the world's largest developer of standards and maintains a network of national standardization institutes in over 140 countries. ISO standards are subject to a fee.

http://www.din.de, http://www.din.com

The Deutsches Institut für Normung (DIN) is a registered technical and scientific association. It was founded in 1917. According to DIN, the organization is the institution responsible for standards in Germany and represents German interests in worldwide and European standards organizations.

The association brings together manufacturers, consumers, trade professionals, service companies, scientists and others who have an interest in the establishment of standards. The standards are subject to a fee and can be ordered using the DIN home page.

B Documentation Updates

  • Filename: gnome_docupdates.xml
  • ID: app.gnome.docupdates

This chapter lists content changes for this document.

This manual was updated on the following dates:

B.1 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)

General
Bug Fixes

B.2 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.

Changes for this Guide
  • Book structure: Restructured and merged some sections.

  • Documentation updated from GNOME 3.10 to GNOME 3.20. Minor changes only.

  • Documentation updated for LibreOffice 5.1. Minor changes only. (FATE#320521)

B.3 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

General
  • SMT Guide is now part of the documentation for SUSE Linux Enterprise Desktop.

  • Add-ons provided by SUSE have been renamed as modules and extensions. The manuals have been updated to reflect this change.

  • Numerous small fixes and additions to the documentation, based on technical feedback.

  • The registration service has been changed from Novell Customer Center to SUSE Customer Center.

  • In YaST, you will now reach Network Settings via the System group. Network Devices is gone (https://bugzilla.suse.com/show_bug.cgi?id=867809).

Changes for this Guide
Bugfixes
  • Fixed inconsistent terminology referring to the Dash of GNOME Shell (from Doc Comments).

B.4 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)

General
  • Removed all KDE documentation and references because KDE is no longer shipped.

  • Removed all references to SuSEconfig, which is no longer supported (Fate #100011).

  • Move from System V init to systemd (Fate #310421). Updated affected parts of the documentation.

  • YaST Runlevel Editor has changed to Services Manager (Fate #312568). Updated affected parts of the documentation.

  • Removed all references to ISDN support, as ISDN support has been removed (Fate #314594).

  • Removed all references to the YaST DSL module as it is no longer shipped (Fate #316264).

  • Removed all references to the YaST Modem module as it is no longer shipped (Fate #316264).

  • Btrfs has become the default file system for the root partition (Fate #315901). Updated affected parts of the documentation.

  • dmesg now provides human-readable time stamps in ctime()-like format (Fate #316056). Updated affected parts of the documentation.

  • syslog and syslog-ng have been replaced by rsyslog (Fate #316175). Updated affected parts of the documentation.

  • MariaDB is now shipped as the relational database instead of MySQL (Fate #313595). Updated affected parts of the documentation.

  • SUSE-related products are no longer available from http://download.novell.com but from http://download.suse.com. Adjusted links accordingly.

  • Novell Customer Center has been replaced with SUSE Customer Center. Updated affected parts of the documentation.

  • /var/run is mounted as tmpfs (Fate #303793). Updated affected parts of the documentation.

  • The following architectures are no longer supported: IA64 and x86. Updated affected parts of the documentation.

  • The traditional method for setting up the network with ifconfig has been replaced by wicked. Updated affected parts of the documentation.

  • A lot of networking commands are deprecated and have been replaced by newer commands (usually ip). Updated affected parts of the documentation.

    arp: ip neighbor
    ifconfig: ip addr, ip link
    iptunnel: ip tunnel
    iwconfig: iw
    nameif: ip link, ifrename
    netstat: ss, ip route, ip -s link, ip maddr
    route: ip route
  • Numerous small fixes and additions to the documentation, based on technical feedback.

Changes for This Guide
  • Merged the Application Guide into this guide.

  • Merged the LibreOffice Quick Start into this guide.

  • Documentation updated from GNOME 2 to GNOME 3. Major user interface changes.

SUSE Linux Enterprise Desktop 12 SP3

Security Guide

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

Publication Date: May 07, 2018
About This Guide
Available Documentation
Feedback
Documentation Conventions
1 Security and Confidentiality
1.1 Local Security and Network Security
1.2 Some General Security Tips and Tricks
1.3 Using the Central Security Reporting Address
I Authentication
2 Authentication with PAM
2.1 What is PAM?
2.2 Structure of a PAM Configuration File
2.3 The PAM Configuration of sshd
2.4 Configuration of PAM Modules
2.5 Configuring PAM Using pam-config
2.6 Manually Configuring PAM
2.7 For More Information
3 Using NIS
3.1 Configuring NIS Servers
3.2 Configuring NIS Clients
4 Setting Up Authentication Servers and Clients Using YaST
4.1 Configuring an Authentication Server
4.2 Configuring an Authentication Client with YaST
4.3 SSSD
5 LDAP—A Directory Service
5.1 LDAP versus NIS
5.2 Structure of an LDAP Directory Tree
5.3 Configuring an LDAP Client with YaST
5.4 Configuring LDAP Users and Groups in YaST
5.5 For More Information
6 Network Authentication with Kerberos
6.1 Kerberos Terminology
6.2 How Kerberos Works
6.3 User View of Kerberos
6.4 Setting up Kerberos using LDAP and Kerberos Client
6.5 For More Information
7 Active Directory Support
7.1 Integrating Linux and Active Directory Environments
7.2 Background Information for Linux Active Directory Support
7.3 Configuring a Linux Client for Active Directory
7.4 Logging In to an Active Directory Domain
7.5 Changing Passwords
II Local Security
8 Configuring Security Settings with YaST
8.1 Security Overview
8.2 Predefined Security Configurations
8.3 Password Settings
8.4 Boot Settings
8.5 Login Settings
8.6 User Addition
8.7 Miscellaneous Settings
9 Authorization with PolKit
9.1 Conceptual Overview
9.2 Authorization Types
9.3 Querying Privileges
9.4 Modifying Configuration Files
9.5 Restoring the Default Privileges
10 Access Control Lists in Linux
10.1 Traditional File Permissions
10.2 Advantages of ACLs
10.3 Definitions
10.4 Handling ACLs
10.5 ACL Support in Applications
10.6 For More Information
11 Encrypting Partitions and Files
11.1 Setting Up an Encrypted File System with YaST
11.2 Using Encrypted Home Directories
11.3 Encrypting Files with GPG
12 Certificate Store
12.1 Activating Certificate Store
12.2 Importing Certificates
13 Intrusion Detection with AIDE
13.1 Why Use AIDE?
13.2 Setting Up an AIDE Database
13.3 Local AIDE Checks
13.4 System Independent Checking
13.5 For More Information
III Network Security
14 SSH: Secure Network Operations
14.1 ssh—Secure Shell
14.2 scp—Secure Copy
14.3 sftp—Secure File Transfer
14.4 The SSH Daemon (sshd)
14.5 SSH Authentication Mechanisms
14.6 Port Forwarding
14.7 For More Information
15 Masquerading and Firewalls
15.1 Packet Filtering with iptables
15.2 Masquerading Basics
15.3 Firewalling Basics
15.4 SuSEFirewall2
15.5 For More Information
16 Configuring a VPN Server
16.1 Conceptual Overview
16.2 Setting Up a Simple Test Scenario
16.3 Setting Up Your VPN Server Using a Certificate Authority
16.4 For More Information
17 Managing X.509 Certification
17.1 The Principles of Digital Certification
17.2 YaST Modules for CA Management
IV Confining Privileges with AppArmor
18 Introducing AppArmor
18.1 AppArmor Components
18.2 Background Information on AppArmor Profiling
19 Getting Started
19.1 Installing AppArmor
19.2 Enabling and Disabling AppArmor
19.3 Choosing Applications to Profile
19.4 Building and Modifying Profiles
19.5 Updating Your Profiles
20 Immunizing Programs
20.1 Introducing the AppArmor Framework
20.2 Determining Programs to Immunize
20.3 Immunizing cron Jobs
20.4 Immunizing Network Applications
21 Profile Components and Syntax
21.1 Breaking an AppArmor Profile into Its Parts
21.2 Profile Types
21.3 Include Statements
21.4 Capability Entries (POSIX.1e)
21.5 Network Access Control
21.6 Profile Names, Flags, Paths, and Globbing
21.7 File Permission Access Modes
21.8 Execute Modes
21.9 Resource Limit Control
21.10 Auditing Rules
22 AppArmor Profile Repositories
23 Building and Managing Profiles with YaST
23.1 Manually Adding a Profile
23.2 Editing Profiles
23.3 Deleting a Profile
23.4 Managing AppArmor
24 Building Profiles from the Command Line
24.1 Checking the AppArmor Status
24.2 Building AppArmor Profiles
24.3 Adding or Creating an AppArmor Profile
24.4 Editing an AppArmor Profile
24.5 Unloading Unknown AppArmor Profiles
24.6 Deleting an AppArmor Profile
24.7 Two Methods of Profiling
24.8 Important File Names and Directories
25 Profiling Your Web Applications Using ChangeHat
25.1 Configuring Apache for mod_apparmor
25.2 Managing ChangeHat-Aware Applications
26 Confining Users with pam_apparmor
27 Managing Profiled Applications
27.1 Reacting to Security Event Rejections
27.2 Maintaining Your Security Profiles
28 Support
28.1 Updating AppArmor Online
28.2 Using the Man Pages
28.3 For More Information
28.4 Troubleshooting
28.5 Reporting Bugs for AppArmor
29 AppArmor Glossary
V The Linux Audit Framework
30 Understanding Linux Audit
30.1 Introducing the Components of Linux Audit
30.2 Configuring the Audit Daemon
30.3 Controlling the Audit System Using auditctl
30.4 Passing Parameters to the Audit System
30.5 Understanding the Audit Logs and Generating Reports
30.6 Querying the Audit Daemon Logs with ausearch
30.7 Analyzing Processes with autrace
30.8 Visualizing Audit Data
30.9 Relaying Audit Event Notifications
31 Setting Up the Linux Audit Framework
31.1 Determining the Components to Audit
31.2 Configuring the Audit Daemon
31.3 Enabling Audit for System Calls
31.4 Setting Up Audit Rules
31.5 Configuring Audit Reports
31.6 Configuring Log Visualization
32 Introducing an Audit Rule Set
32.1 Adding Basic Audit Configuration Parameters
32.2 Adding Watches on Audit Log Files and Configuration Files
32.3 Monitoring File System Objects
32.4 Monitoring Security Configuration Files and Databases
32.5 Monitoring Miscellaneous System Calls
32.6 Filtering System Call Arguments
32.7 Managing Audit Event Records Using Keys
33 Useful Resources
A Documentation Updates
A.1 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
A.2 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
A.3 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)
A.4 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)
A.5 February 2015 (Documentation Maintenance Update)
A.6 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

  • Filename: security_intro.xml
  • ID: preface.security

This manual introduces the basic concepts of system security on SUSE Linux Enterprise Desktop. It covers extensive documentation about the authentication mechanisms available on Linux, such as NIS or LDAP. It deals with aspects of local security like access control lists, encryption and intrusion detection. In the network security part you learn how to secure computers with firewalls and masquerading, and how to set up virtual private networks (VPN). This manual shows how to use security software like AppArmor (which lets you specify per program which files the program may read, write, and execute) or the auditing system that collects information about security-relevant events.

1 Available Documentation

  • Filename: common_intro_available_doc_i.xml
  • ID: no ID found
Note
Note: Online Documentation and Latest Updates

Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates, and browse or download the documentation in various formats.

In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.

The following documentation is available for this product:

Installation Quick Start

Lists the system requirements and guides you step-by-step through the installation of SUSE Linux Enterprise Desktop from DVD, or from an ISO image.

Deployment Guide

Shows how to install single or multiple systems and how to exploit the product inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly-customized, and automated installation technique.

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Security Guide

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

System Analysis and Tuning Guide

An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

GNOME User Guide

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

2 Feedback

  • Filename: common_intro_feedback_i.xml
  • ID: no ID found

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

3 Documentation Conventions

  • Filename: common_intro_typografie_i.xml
  • ID: no ID found

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

1 Security and Confidentiality

  • Filename: security_preface.xml
  • ID: cha.security

One of the main characteristics of a Linux or Unix system is its ability to handle several users at the same time (multiuser) and to allow these users to perform several tasks (multitasking) on the same computer simultaneously. Moreover, the operating system is network transparent. The users often do not know whether the data and applications they are using are provided locally from their machine or made available over the network.

With the multiuser capability, the data of different users must be stored separately, and security and privacy need to be guaranteed. Data security was already an important issue, even before computers could be linked through networks. Like today, the most important concern was the ability to keep data available in spite of a lost or otherwise damaged data medium (usually a hard disk).

This section is primarily focused on confidentiality issues and on ways to protect the privacy of users. But it cannot be stressed enough that a comprehensive security concept should always include procedures to have a regularly updated, workable, and tested backup in place. Without this, you could have a very hard time getting your data back—not only in the case of some hardware defect, but also in the case that someone has gained unauthorized access and tampered with files.

1.1 Local Security and Network Security

There are several ways of accessing data:

  • personal communication with people who have the desired information or access to the data on a computer

  • directly through physical access from the console of a computer

  • over a serial line

  • using a network link

In all these cases, a user should be authenticated before accessing the resources or data in question. A Web server might be less restrictive in this respect, but you still would not want it to disclose your personal data to an anonymous user.

In the list above, the first case is the one where the highest amount of human interaction is involved (such as when you are contacting a bank employee and are required to prove that you are the person owning that bank account). Then, you are asked to provide a signature, a PIN, or a password to prove that you are the person you claim to be. In some cases, it might be possible to elicit some intelligence from an informed person by mentioning known bits and pieces to win the confidence of that person. The victim could be led to reveal gradually more information, maybe without even being aware of it. Among hackers, this is called social engineering. You can only guard against this by educating people and by dealing with language and information in a conscious way. Before breaking into computer systems, attackers often try to target receptionists, service people working with the company, or even family members. Often such an attack based on social engineering is only discovered at a much later time.

A person wanting to obtain unauthorized access to your data could also use the traditional way and try to get at your hardware directly. Therefore, the machine should be protected against any tampering so that no one can remove, replace, or cripple its components. This also applies to backups and even any network cables or power cords. Also secure the boot procedure, because there are some well-known key combinations that might provoke unusual behavior. Protect yourself against this by setting passwords for the BIOS and the boot loader.

Serial terminals connected to serial ports are still used in many places. Unlike network interfaces, they do not rely on network protocols to communicate with the host. A simple cable or an infrared port is used to send plain characters back and forth between the devices. The cable itself is the weakest point of such a system: with an older printer connected to it, it is easy to record any data being transferred that way. What can be achieved with a printer can also be accomplished in other ways, depending on the effort that goes into the attack.

Reading a file locally on a host is governed by different access rules than opening a network connection to a service on another host. This is why a distinction is made between local security and network security. The line is drawn where data must be put into packets to be sent somewhere else.

1.1.1 Local Security

Local security starts with the physical environment at the location in which the computer is running. Set up your machine in a place where security is in line with your expectations and needs. The main goal of local security is to keep users separate from each other, so no user can assume the permissions or the identity of another. This is a general rule to be observed, but it is especially true for the user root, who holds system administration privileges. root can take on the identity of any other local user and read any locally-stored file without being prompted for the password.

1.1.1.1 Passwords

On a Linux system, passwords are not stored as plain text and the entered text string is not simply matched with the saved pattern. If this were the case, all accounts on your system would be compromised when someone got access to the corresponding file. Instead, the stored password is encrypted and, each time it is entered, is encrypted again and the two encrypted strings are compared. This only provides more security if the encrypted password cannot be reverse-computed into the original text string.

This is achieved by a special kind of algorithm, also called a trapdoor algorithm, because it only works in one direction. An attacker who has obtained the encrypted string cannot get your password by simply applying the same algorithm again. Instead, it would be necessary to test all possible character combinations until a combination is found that looks like your password when encrypted. With passwords eight characters long, there is already an enormous number of combinations to test.
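
To get a feeling for the numbers involved, the following Bash arithmetic is a small illustration; the character-set sizes are assumptions (26 for lowercase letters only, 95 for all printable ASCII characters):

    # Eight characters from lowercase letters only: 26^8 combinations
    tux > echo $((26**8))
    208827064576
    # Eight characters from all printable ASCII characters: 95^8 combinations
    tux > echo $((95**8))
    6634204312890625

The difference of more than four orders of magnitude shows why both password length and character variety matter.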

In the seventies, it was argued that this method would be more secure than others because of the relative slowness of the algorithm used which took a few seconds to encrypt one password. In the meantime, PCs have become powerful enough to do several hundred thousand or even millions of encryptions per second. Because of this, encrypted passwords should not be visible to regular users (/etc/shadow cannot be read by normal users). It is even more important that passwords are not easy to guess, in case the password file becomes visible because of an error. Consequently, it is not really useful to translate a password like tantalize into t@nt@1lz3.

Replacing some letters of a word with similar looking numbers (like writing the password tantalize as t@nt@1lz3) is not sufficient. Password cracking programs that use dictionaries to guess words also play with substitutions like that. A better way is to make up a word that only makes sense to you personally, like the first letters of the words of a sentence or the title of a book, such as The Name of the Rose by Umberto Eco. This would give the following safe password: TNotRbUE9. In contrast, passwords like beerbuddy or jasmine76 are easily guessed even by someone who has only some casual knowledge about you.

1.1.1.2 The Boot Procedure

Configure your system so it cannot be booted from a removable device, either by removing the drives entirely or by setting a BIOS password and configuring the BIOS to allow booting from a hard disk only. Normally, a Linux system is started by a boot loader, allowing you to pass additional options to the booted kernel. Prevent others from using such parameters during boot by setting an additional password for the boot loader (see Section 13.2.6, “Setting a Boot Password” for instructions). This is crucial to your system's security. Not only does the kernel itself run with root permissions, but it is also the first authority to grant root permissions at system start-up.

1.1.1.3 File Permissions

As a general rule, always work with the most restrictive privileges possible for a given task. For example, it is definitely not necessary to be root to read or write e-mail. If the mail program has a bug, this bug could be exploited for an attack that acts with exactly the permissions of the program when it was started. By following the above rule, minimize the possible damage.

The permissions of all files included in the SUSE Linux Enterprise Desktop distribution are carefully chosen. A system administrator who installs additional software or other files should take great care when doing so, especially when setting the permission bits. Experienced and security-conscious system administrators always use the -l option with the command ls to get an extensive file list, which allows them to detect any incorrect file permissions immediately. An incorrect file attribute does not only mean that files could be changed or deleted. These modified files could be executed by root or, in the case of configuration files, programs could use such files with the permissions of root. This significantly increases the possibilities of an attack. Attacks like these are called cuckoo eggs, because the program (the egg) is executed (hatched) by a different user (bird), similar to how a cuckoo tricks other birds into hatching its eggs.

A SUSE® Linux Enterprise Desktop system includes the files permissions, permissions.easy, permissions.secure, and permissions.paranoid, all in the directory /etc. The purpose of these files is to define special permissions, such as world-writable directories or, for files, the setuser ID bit (programs with the setuser ID bit set do not run with the permissions of the user that launched them, but with the permissions of the file owner, usually root). An administrator can use the file /etc/permissions.local to add his own settings.

To define which of the above files is used by SUSE Linux Enterprise Desktop's configuration programs to set permissions, select Local Security in the Security and Users section of YaST. To learn more about the topic, read the comments in /etc/permissions or consult the manual page of chmod (man chmod).
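
As a practical illustration of such checks, the following generic commands inspect permissions from the shell; the paths are examples only:

    # A long listing shows owner, group, and the permission bits
    tux > ls -l /usr/bin/passwd
    # Find world-writable files on the root file system
    root # find / -xdev -type f -perm -0002 -ls
    # Find setuid programs, which deserve special scrutiny
    root # find / -xdev -type f -perm -4000 -print

On SUSE Linux Enterprise, the chkstat tool from the permissions package can apply the settings defined in the /etc/permissions* files; see its man page for details.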

1.1.1.4 Buffer Overflows and Format String Bugs

Special care must be taken whenever a program needs to process data that could be changed by a user, but this is more of an issue for the programmer of an application than for regular users. The programmer must make sure that his application interprets data in the correct way, without writing it into memory areas that are too small to hold it. Also, the program should hand over data in a consistent manner, using interfaces defined for that purpose.

A buffer overflow can happen if the actual size of a memory buffer is not taken into account when writing to that buffer. There are cases where this data (as generated by the user) uses up more space than what is available in the buffer. As a result, data is written beyond the end of that buffer area, which, under certain circumstances, makes it possible for a program to execute program sequences influenced by the user (and not by the programmer), rather than processing user data only. A bug of this kind may have serious consequences, especially if the program is being executed with special privileges (see Section 1.1.1.3, “File Permissions”).

Format string bugs work in a slightly different way, but again it is the user input that could lead the program astray. Usually, these programming errors are exploited with programs executed with special permissions—setuid and setgid programs—which also means that you can protect your data and your system from such bugs by removing the corresponding execution privileges from programs. Again, the best way is to apply a policy of using the lowest possible privileges (see Section 1.1.1.3, “File Permissions”).

Given that buffer overflows and format string bugs are related to the handling of user data, they are not only exploitable if access has been given to a local account. Many of the bugs that have been reported can also be exploited over a network link. Accordingly, buffer overflows and format string bugs should be classified as being relevant for both local and network security.

1.1.1.5 Viruses

Contrary to popular opinion, there are viruses that run on Linux. However, the viruses that are known were released by their authors as a proof of concept that the technique works as intended. None of these viruses have been spotted in the wild so far.

Viruses cannot survive and spread without a host on which to live. In this case, the host would be a program or an important storage area of the system (for example, the master boot record) that needs to be writable for the program code of the virus. Because of its multiuser capability, Linux can restrict write access to certain files (this is especially important with system files). Therefore, if you did your normal work with root permissions, you would increase the chance of the system being infected by a virus. In contrast, if you follow the principle of using the lowest possible privileges as mentioned above, chances of getting a virus are slim.

Apart from that, you should never rush into executing a program from some Internet site that you do not really know. SUSE Linux Enterprise Desktop's RPM packages carry a cryptographic signature, as a digital label certifying that the necessary care was taken to build them. Viruses are a typical sign that the administrator or the user lacks the required security awareness, putting at risk even a system that should be highly secure by its very design.

Viruses should not be confused with worms, which belong entirely to the world of networks. Worms do not need a host to spread.

1.1.2 Network Security

Network security is important for protecting against attacks started over the network, from outside the local system. The typical login procedure requiring a user name and a password for user authentication is still a local security issue. In the particular case of logging in over a network, differentiate between the two security aspects: what happens up to the actual authentication is network security, and anything that happens afterward is local security.

1.1.2.1 X Window System and X Authentication

As mentioned at the beginning, network transparency is one of the central characteristics of a Unix system. X, the windowing system of Unix operating systems, can use this feature in an impressive way. With X, it is no problem to log in to a remote host and start a graphical program that is then sent over the network to be displayed on your computer.

When an X client needs to be displayed remotely using an X server, the latter should protect the resource managed by it (the display) from unauthorized access. In more concrete terms, certain permissions must be given to the client program. With the X Window System, there are two ways to do this, called host-based access control and cookie-based access control. The former relies on the IP address of the host where the client should run. The program to control this is xhost. xhost enters the IP address of a legitimate client into a database belonging to the X server. However, relying on IP addresses for authentication is not very secure. For example, if there were a second user working on the host sending the client program, that user would have access to the X server as well—like someone stealing the IP address. Because of these shortcomings, this authentication method is not described in more detail here, but you can learn about it with man xhost.

In the case of cookie-based access control, a character string is generated that is only known to the X server and to the legitimate user, like an ID card of some kind. This cookie is stored on login in the file .Xauthority in the user's home directory and is available to any X client wanting to use the X server to display a window. The file .Xauthority can be examined by the user with the tool xauth. If you renamed .Xauthority or accidentally deleted it from your home directory, you would not be able to open any new windows or X clients.

SSH (secure shell) can be used to encrypt a network connection and forward it to an X server transparently. This is also called X forwarding. X forwarding is achieved by simulating an X server on the server side and setting a DISPLAY variable for the shell on the remote host. Further details about SSH can be found in Chapter 14, SSH: Secure Network Operations.
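
A minimal sketch of X forwarding in practice; the host name is a placeholder, and X11 forwarding must be enabled in the server's sshd configuration:

    # Log in to the remote host with X forwarding enabled
    tux > ssh -X tux@jupiter.example.com
    # On the remote host, DISPLAY now points to the forwarded display
    tux > echo $DISPLAY
    localhost:10.0
    # Graphical programs started here appear on the local machine
    tux > xterm &

The related option ssh -Y enables trusted forwarding, which bypasses the restrictions of the X11 SECURITY extension and should be used with even more caution; see the following warning.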

Warning
Warning: X Forwarding Can Be Insecure

If you do not consider the host where you log in to be a secure host, do not use X forwarding. If X forwarding is enabled, an attacker could authenticate via your SSH connection. The attacker could then intrude on your X server and, for example, read your keyboard input.

1.1.2.2 Buffer Overflows and Format String Bugs

As discussed in Section 1.1.1.4, “Buffer Overflows and Format String Bugs”, buffer overflows and format string bugs should be classified as issues applying to both local and network security. As with the local variants of such bugs, buffer overflows in network programs, when successfully exploited, are mostly used to obtain root permissions. Even if that is not the case, an attacker could use the bug to gain access to an unprivileged local account to exploit other vulnerabilities that might exist on the system.

Buffer overflows and format string bugs exploitable over a network link are certainly the most frequent form of remote attacks, in general. Exploits for these—programs to exploit these newly-found security holes—are often posted on security mailing lists. They can be used to target the vulnerability without knowing the details of the code.

Experience has shown that the availability of exploit codes has contributed to more secure operating systems, as they force operating system makers to fix problems in their software. With free software, anyone has access to the source code (SUSE Linux Enterprise Desktop comes with complete source code) and anyone who finds a vulnerability and its exploit code can submit a patch to fix the corresponding bug.

1.1.2.3 Denial of Service

The purpose of a denial of service (DoS) attack is to block a server program or even an entire system. This can be achieved in several ways: overloading the server, keeping it busy with garbage packets, or exploiting a remote buffer overflow. Often, a DoS attack is made with the sole purpose of making the service disappear. However, when a given service has become unavailable, communications could become vulnerable to man-in-the-middle attacks (sniffing, TCP connection hijacking, spoofing) and DNS poisoning.

1.1.2.4 Man in the Middle: Sniffing, Hijacking, Spoofing

In general, any remote attack performed by an attacker who puts himself between the communicating hosts is called a man-in-the-middle attack. What almost all types of man-in-the-middle attacks have in common is that the victim is usually not aware that there is something happening. There are many variants. For example, the attacker could pick up a connection request and forward that to the target machine. Now the victim has unwittingly established a connection with the wrong host, because the other end is posing as the legitimate destination machine.

The simplest form of a man-in-the-middle attack is called sniffing (the attacker is only listening to the network traffic passing by). As a more complex attack, the man in the middle could try to take over an already established connection (hijacking). To do so, the attacker would need to analyze the packets for some time to be able to predict the TCP sequence numbers belonging to the connection. When the attacker finally seizes the role of the target host, the victims notice it, because they get an error message saying the connection was terminated because of a failure. Protocols that are not secured against hijacking through encryption, and that only perform a simple authentication procedure when establishing the connection, make things easier for attackers.

Spoofing is an attack where packets are modified to contain counterfeit source data, usually the IP address. Most active forms of attack rely on sending out such fake packets (something that, on a Linux machine, can only be done by the superuser (root)).

Many of the attacks mentioned are carried out in combination with a DoS. If an attacker sees an opportunity to bring down a certain host abruptly, even if only for a short time, it makes it easier for him to push the active attack, because the host cannot interfere with the attack for some time.

1.1.2.5 DNS Poisoning

DNS poisoning means that the attacker corrupts the cache of a DNS server by replying to it with spoofed DNS reply packets, trying to get the server to send certain data to a victim who is requesting information from that server. Many servers maintain a trust relationship with other hosts, based on IP addresses or host names. The attacker needs a good understanding of the actual structure of the trust relationships among hosts to disguise itself as one of the trusted hosts. Usually, the attacker analyzes some packets received from the server to get the necessary information. The attacker often needs to target a well-timed DoS attack at the name server as well. Protect yourself by using encrypted connections that can verify the identity of the hosts to which to connect.

1.1.2.6 Worms

Worms are often confused with viruses, but there is a clear difference between the two. Unlike viruses, worms do not need to infect a host program to live. Instead, they are specialized to spread as quickly as possible on network structures. The worms that appeared in the past, such as Ramen, Lion, or Adore, used well-known security holes in server programs like bind8. Protection against worms is relatively easy. Given that some time elapses between the discovery of a security hole and the moment the worm hits your server, there is a good chance that an updated version of the affected program is available on time. That is only useful if the administrator actually installs the security updates on the systems in question.

1.2 Some General Security Tips and Tricks

To handle security competently, it is important to observe some recommendations. You may find the following list of rules useful in dealing with basic security concerns:

  • Get and install the updated packages recommended by security announcements as quickly as possible.

  • Stay informed about the latest security issues.

  • Discuss any security issues of interest on our mailing list opensuse-security@opensuse.org.

  • According to the rule of using the most restrictive set of permissions possible for every job, avoid doing your regular jobs as root. This reduces the risk of getting a cuckoo egg or a virus and protects you from your own mistakes.

  • If possible, always try to use encrypted connections to work on a remote machine. Using ssh (secure shell) to replace telnet, ftp, rsh, and rlogin should be standard practice.

  • Avoid using authentication methods based solely on IP addresses.

  • Try to keep the most important network-related packages up-to-date and subscribe to the corresponding mailing lists to receive announcements on new versions of such programs (bind, postfix, ssh, etc.). The same should apply to software relevant to local security.

  • Change the /etc/permissions file to optimize the permissions of files crucial to your system's security. If you remove the setuid bit from a program, it might well be that it cannot do its job anymore in the intended way. On the other hand, the program will usually have ceased to be a potential security risk. You might take a similar approach with world-writable directories and files.

  • Disable any network services you do not absolutely require for your server to work properly. This makes your system safer. Open ports, with the socket state LISTEN, can be found with the program netstat. As for the options, it is recommended to use netstat -ap or netstat -anp. The -p option allows you to see which process is occupying a port under which name.

    Compare the netstat results with those of a thorough port scan done from outside your host. An excellent program for this job is nmap, which not only checks out the ports of your machine, but also draws some conclusions as to which services are waiting behind them. However, port scanning may be interpreted as an aggressive act, so do not do it on a host without the explicit approval of the administrator. Finally, remember that it is important not only to scan TCP ports, but also UDP ports (options -sS and -sU); a short example session is sketched after this list.

  • To monitor the integrity of the files of your system in a reliable way, use the program AIDE (Advanced Intrusion Detection Environment), available on SUSE Linux Enterprise Desktop. Encrypt the database created by AIDE to prevent someone from tampering with it. Furthermore, keep a backup of this database available outside your machine, stored on an external data medium not connected to it by a network link.

  • Take proper care when installing any third-party software. There have been cases where a hacker had built a Trojan horse into the TAR archive of a security software package, which was fortunately discovered very quickly. If you install a binary package, make sure you have no doubts about the site from which you downloaded it.

    SUSE's RPM packages are gpg-signed. The key used by SUSE for signing is:

    ID:9C800ACA 2000-10-19 SUSE Package Signing Key <build@suse.de>
         Key fingerprint = 79C1 79B2 E1C8 20C1 890F 9994 A84E DAE8 9C80 0ACA

    The command rpm --checksig package.rpm shows whether the checksum and the signature of an uninstalled package are correct. Find the key on the first CD of the distribution and on most key servers worldwide.

  • Check backups of user and system files regularly. Consider that if you do not test whether the backup works, it might actually be worthless.

  • Check your log files. Whenever possible, write a small script to search for suspicious entries. Admittedly, this is not exactly a trivial task. In the end, only you can know which entries are unusual and which are not.

  • Use tcp_wrapper to restrict access to the individual services running on your machine, so you have explicit control over which IP addresses can connect to a service. For further information regarding tcp_wrapper, consult the manual pages of tcpd and hosts_access (man 8 tcpd, man hosts_access).

  • Use SuSEfirewall to enhance the security provided by tcpd (tcp_wrapper).

  • Design your security measures to be redundant: a message seen twice is much better than no message.

  • If you use suspend to disk, consider configuring the suspend image encryption using the configure-suspend-encryption.sh script. The program creates the key, copies it to /etc/suspend.key, and modifies /etc/suspend.conf to use encryption for suspend images.
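
Referring to the port-scanning advice above, a session might look like the following sketch; the host name is a placeholder, and the scan assumes the administrator's explicit approval:

    # Show all sockets with owning processes; TCP listeners show LISTEN
    root # netstat -anp | grep LISTEN
    # From a different machine, scan both TCP and UDP ports of the host
    root # nmap -sS -sU jupiter.example.com

Any port that nmap reports open but that you cannot match to a known process in the netstat output deserves investigation.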

1.3 Using the Central Security Reporting Address

If you discover a security-related problem (check the available update packages first), write an e-mail to security@suse.de. Include a detailed description of the problem and the version number of the package concerned. SUSE will try to send a reply as soon as possible. You are encouraged to pgp-encrypt your e-mail messages. SUSE's PGP key is:

ID:3D25D3D9 1999-03-06 SUSE Security Team <security@suse.de>
Key fingerprint = 73 5F 2E 99 DF DB 94 C4 8F 5A A3 AE AF 22 F2 D5

This key is also available for download from http://www.suse.com/support/security/contact.html.

Part I Authentication

2 Authentication with PAM

Linux uses PAM (pluggable authentication modules) in the authentication process as a layer that mediates between user and application. PAM modules are available on a systemwide basis, so they can be requested by any application. This chapter describes how the modular authentication mechanism works and how it is configured.

3 Using NIS

When multiple Unix systems in a network access common resources, it becomes imperative that all user and group identities are the same for all machines in that network. The network should be transparent to users: their environments should not vary, regardless of which machine they are actually using. This can be done by means of NIS and NFS services. NFS distributes file systems over a network and is discussed in Chapter 26, Sharing File Systems with NFS.

NIS (Network Information Service) can be described as a database-like service that provides access to the contents of /etc/passwd, /etc/shadow, and /etc/group across networks. NIS can also be used for other purposes (making the contents of files like /etc/hosts or /etc/services available, for example), but this is beyond the scope of this introduction. People often refer to NIS as YP, because it works like the network's yellow pages.

4 Setting Up Authentication Servers and Clients Using YaST

The Authentication Server is based on LDAP and optionally Kerberos. On SUSE Linux Enterprise Desktop you can configure it with a YaST wizard.

For more information about LDAP, see Chapter 5, LDAP—A Directory Service, and about Kerberos, see Chapter 6, Network Authentication with Kerberos.

5 LDAP—A Directory Service

The Lightweight Directory Access Protocol (LDAP) is a set of protocols designed to access and maintain information directories. LDAP can be used for user and group management, system configuration management, address management, and more. This chapter provides a basic understanding of how OpenLDAP works.

6 Network Authentication with Kerberos

An open network provides no means of ensuring that a workstation can identify its users properly, except through the usual password mechanisms. In common installations, the user must enter the password each time a service inside the network is accessed. Kerberos provides an authentication method wit…

7 Active Directory Support

Active Directory* (AD) is a directory service based on LDAP, Kerberos, and other services. It is used by Microsoft* Windows* to manage resources, services, and people. In a Microsoft Windows network, Active Directory provides information about these objects, restricts access to them, and enforces policies. SUSE® Linux Enterprise Desktop lets you join existing Active Directory domains and integrate your Linux machine into a Windows environment.

2 Authentication with PAM

Abstract

Linux uses PAM (pluggable authentication modules) in the authentication process as a layer that mediates between user and application. PAM modules are available on a systemwide basis, so they can be requested by any application. This chapter describes how the modular authentication mechanism works and how it is configured.

2.1 What is PAM?

System administrators and programmers often want to restrict access to certain parts of the system or to limit the use of certain functions of an application. Without PAM, applications must be adapted every time a new authentication mechanism, such as LDAP, Samba, or Kerberos, is introduced. This process is time-consuming and error-prone. One way to avoid these drawbacks is to separate applications from the authentication mechanism and delegate authentication to centrally managed modules. Whenever a new authentication scheme is required, it is sufficient to adapt or write a suitable PAM module for use by the program in question.

The PAM concept consists of:

  • PAM modules, which are a set of shared libraries for a specific authentication mechanism.

  • A module stack consisting of one or more PAM modules.

  • A PAM-aware service that needs authentication by using a module stack or PAM modules. Usually, the service name is the familiar name of the corresponding application, like login or su. The service name other is a reserved word for default rules.

  • Module arguments, with which the execution of a single PAM module can be influenced.

  • A mechanism evaluating the result of each single PAM module execution. A positive value triggers the execution of the next PAM module. How a negative value is dealt with depends on the configuration: the options range from no influence at all to immediate termination, and anything in between is valid.

2.2 Structure of a PAM Configuration File

PAM can be configured in two ways:

File based configuration (/etc/pam.conf)

The configuration of each service is stored in /etc/pam.conf. However, for maintenance and usability reasons, this configuration scheme is not used in SUSE Linux Enterprise Desktop.

Directory based configuration (/etc/pam.d/)

Every service (or program) that relies on the PAM mechanism has its own configuration file in the /etc/pam.d/ directory. For example, the service for sshd can be found in the /etc/pam.d/sshd file.

The files under /etc/pam.d/ define the PAM modules used for authentication. Each file consists of lines, which define a service, and each line consists of a maximum of four components:

TYPE  CONTROL  MODULE_PATH  MODULE_ARGS

The components have the following meaning:

TYPE

Declares the type of the service. PAM modules are processed as stacks. Different types of modules have different purposes. For example, one module checks the password, another verifies the location from which the system is accessed, and yet another reads user-specific settings. PAM knows about four different types of modules:

auth

Modules of this type check the user's authenticity, traditionally by querying a password. However, this can also be achieved with a chip card or through biometrics (for example, fingerprints or an iris scan).

account

Modules of this type check if the user has general permission to use the requested service. As an example, such a check should be performed to ensure that no one can log in with the user name of an expired account.

password

The purpose of this type of module is to enable the change of an authentication token. Usually this is a password.

session

Modules of this type are responsible for managing and configuring user sessions. They are started before and after authentication to log login attempts and configure the user's specific environment (mail accounts, home directory, system limits, etc.).

CONTROL

Indicates the behavior of a PAM module. Each module can have one of the following control flags:

required

A module with this flag must be successfully processed before the authentication may proceed. After the failure of a module with the required flag, all other modules with the same flag are processed before the user receives a message about the failure of the authentication attempt.

requisite

Modules having this flag must also be processed successfully, in much the same way as a module with the required flag. However, in case of failure a module with this flag gives immediate feedback to the user and no further modules are processed. In case of success, other modules are subsequently processed, like any modules with the required flag. The requisite flag can be used as a basic filter checking for the existence of certain conditions that are essential for a correct authentication.

sufficient

After a module with this flag has been successfully processed, the requesting application receives an immediate message about the success and no further modules are processed, provided there was no preceding failure of a module with the required flag. The failure of a module with the sufficient flag has no direct consequences, in the sense that any subsequent modules are processed in their respective order.

optional

The failure or success of a module with this flag does not have any direct consequences. This can be useful for modules that are only intended to display a message (for example, to tell the user that mail has arrived) without taking any further action.

include

If this flag is given, the file specified as its argument is inserted at this point.

MODULE_PATH

Contains the full file name of a PAM module. It does not need to be specified explicitly as long as the module is located in the default directory /lib/security (for all 64-bit platforms supported by SUSE® Linux Enterprise Desktop, the directory is /lib64/security).

MODULE_ARGS

Contains a space-separated list of options to influence the behavior of a PAM module, such as debug (enables debugging) or nullok (allows the use of empty passwords).
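
Putting the four components together, a single line in a file under /etc/pam.d/ could look like the following purely illustrative sketch, which stacks pam_unix.so as a required auth module and passes it the nullok and debug options mentioned above:

auth  required  pam_unix.so  nullok debug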

In addition, there are global configuration files for PAM modules under /etc/security, which define the exact behavior of these modules (examples include pam_env.conf and time.conf). Every application that uses a PAM module actually calls a set of PAM functions, which then process the information in the various configuration files and return the result to the requesting application.

To simplify the creation and maintenance of PAM configurations, common default configuration files for the auth, account, password, and session module types have been introduced. These are included by every application's PAM configuration. Updates to the global PAM configuration modules in common-* are thus propagated across all PAM configuration files without requiring the administrator to update every single PAM configuration file.

The global PAM configuration files are maintained using the pam-config tool. This tool automatically adds new modules to the configuration, changes the configuration of existing ones or deletes modules (or options) from the configurations. Manual intervention in maintaining PAM configurations is minimized or no longer required.

Note
Note: 64-Bit and 32-Bit Mixed Installations

When using a 64-bit operating system, it is possible to also include a runtime environment for 32-bit applications. In this case, make sure that you also install the 32-bit version of the PAM modules.

2.3 The PAM Configuration of sshd

Consider the PAM configuration of sshd as an example:

Example 2.1: PAM Configuration for sshd (/etc/pam.d/sshd)
#%PAM-1.0 1
auth     requisite      pam_nologin.so                              2
auth     include        common-auth                                 3
account  requisite      pam_nologin.so                              2
account  include        common-account                              3
password include        common-password                             3
session  required       pam_loginuid.so                             4
session  include        common-session                              3
session  optional       pam_lastlog.so   silent noupdate showfailed 5

1

Declares the version of this configuration file for PAM 1.0. This is merely a convention, but could be used in the future to check the version.

2

Checks whether /etc/nologin exists. If it does, no user other than root may log in.

3

Refers to the configuration files of four module types: common-auth, common-account, common-password, and common-session. These four files hold the default configuration for each module type.

4

Sets the login uid process attribute for the process that was authenticated.

5

Displays information about the last login of a user.

By including the configuration files instead of adding each module separately to the respective PAM configuration, you automatically get an updated PAM configuration when an administrator changes the defaults. Formerly, you needed to adjust all configuration files manually for all applications when changes to PAM occurred or a new application was installed. Now the PAM configuration is made with central configuration files and all changes are automatically inherited by the PAM configuration of each service.

The first include file (common-auth) calls three modules of the auth type: pam_env.so, pam_gnome_keyring.so and pam_unix.so. See Example 2.2, “Default Configuration for the auth Section (common-auth)”.

Example 2.2: Default Configuration for the auth Section (common-auth)
auth  required  pam_env.so                   1
auth  optional  pam_gnome_keyring.so         2
auth  required  pam_unix.so  try_first_pass 3

1

pam_env.so loads /etc/security/pam_env.conf to set the environment variables as specified in this file. It can be used to set the DISPLAY variable to the correct value, because the pam_env module knows about the location from which the login is taking place.

2

pam_gnome_keyring.so checks the user's login and password against the GNOME keyring.

3

pam_unix checks the user's login and password against /etc/passwd and /etc/shadow.

The whole stack of auth modules is processed before sshd gets any feedback about whether the login has succeeded. All modules of the stack having the required control flag must be processed successfully before sshd receives a message about the positive result. If one of the modules is not successful, the entire module stack is still processed and only then is sshd notified about the negative result.

When all modules of the auth type have been successfully processed, another include statement is processed, in this case, that in Example 2.3, “Default Configuration for the account Section (common-account)”. common-account contains only one module, pam_unix. If pam_unix returns the result that the user exists, sshd receives a message announcing this success and the next stack of modules (password) is processed, shown in Example 2.4, “Default Configuration for the password Section (common-password)”.

Example 2.3: Default Configuration for the account Section (common-account)
account  required  pam_unix.so  try_first_pass
Example 2.4: Default Configuration for the password Section (common-password)
password  requisite  pam_cracklib.so
password  optional   pam_gnome_keyring.so  use_authtok
password  required   pam_unix.so  use_authtok nullok shadow try_first_pass

Again, the PAM configuration of sshd involves only an include statement referring to the default configuration for password modules located in common-password. These modules must be completed successfully (control flags requisite and required) whenever the application requests the change of an authentication token.

Changing a password or another authentication token requires a security check. This is achieved with the pam_cracklib module. The pam_unix module used afterward carries over any old and new passwords from pam_cracklib, so the user does not need to authenticate again after changing the password. This procedure makes it impossible to circumvent the checks carried out by pam_cracklib. Whenever the account or the auth type are configured to complain about expired passwords, the password modules should also be used.

Example 2.5: Default Configuration for the session Section (common-session)
session  required  pam_limits.so
session  required  pam_unix.so  try_first_pass
session  optional  pam_umask.so
session  optional  pam_systemd.so
session  optional  pam_gnome_keyring.so auto_start only_if=gdm,gdm-password,lxdm,lightdm
session  optional  pam_env.so

As the final step, the modules of the session type (bundled in the common-session file) are called to configure the session according to the settings for the user in question. The pam_limits module loads the file /etc/security/limits.conf, which may define limits on the use of certain system resources. The pam_unix module is processed again. The pam_umask module can be used to set the file mode creation mask. Since this module carries the optional flag, a failure of this module would not affect the successful completion of the entire session module stack. The session modules are called a second time when the user logs out.

2.4 Configuration of PAM Modules

Some PAM modules are configurable. The configuration files are located in /etc/security. This section briefly describes the configuration files relevant to the sshd example—pam_env.conf and limits.conf.

2.4.1 pam_env.conf

pam_env.conf can be used to define a standardized environment for users that is set whenever the pam_env module is called. Use it to preset environment variables with the following syntax:

VARIABLE  [DEFAULT=VALUE]  [OVERRIDE=VALUE]
VARIABLE

Name of the environment variable to set.

[DEFAULT=VALUE]

Default VALUE the administrator wants to set.

[OVERRIDE=VALUE]

Values that may be queried and set by pam_env, overriding the default value.

A typical example of how pam_env can be used is the adaptation of the DISPLAY variable, which is changed whenever a remote login takes place. This is shown in Example 2.6, “pam_env.conf”.

Example 2.6: pam_env.conf
REMOTEHOST  DEFAULT=localhost          OVERRIDE=@{PAM_RHOST}
DISPLAY     DEFAULT=${REMOTEHOST}:0.0  OVERRIDE=${DISPLAY}

The first line sets the value of the REMOTEHOST variable to localhost, which is used whenever pam_env cannot determine any other value. The DISPLAY variable in turn contains the value of REMOTEHOST. Find more information in the comments in /etc/security/pam_env.conf.

2.4.2 pam_mount.conf.xml

The purpose of pam_mount is to mount user home directories during the login process, and to unmount them during logout in an environment where a central file server keeps all the home directories of users. With this method, it is not necessary to mount a complete /home directory where all the user home directories would be accessible. Instead, only the home directory of the user who is about to log in is mounted.

After installing pam_mount, a template for pam_mount.conf.xml is available in /etc/security. The description of the various elements can be found in the manual page man 5 pam_mount.conf.
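
A minimal volume definition in pam_mount.conf.xml might look like the following sketch. The server name and share path are assumptions for illustration; consult man 5 pam_mount.conf for the authoritative syntax:

<!-- mount the user's home share from a CIFS file server at login -->
<volume user="*" fstype="cifs" server="fileserver.example.com"
        path="home/%(USER)" mountpoint="~" />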

A basic configuration of this feature can be done with YaST. Select Network Settings › Windows Domain Membership › Expert Settings to add the file server; see Section 27.4, “Configuring Clients”.

2.4.3 limits.conf

System limits can be set on a user or group basis in limits.conf, which is read by the pam_limits module. The file allows you to set hard limits, which may not be exceeded, and soft limits, which may be exceeded temporarily. For more information about the syntax and the options, see the comments in /etc/security/limits.conf.
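
Entries in limits.conf follow the pattern DOMAIN TYPE ITEM VALUE. A short, purely illustrative excerpt (the values are assumptions, not recommendations):

# soft limit on open files: users may raise it up to the hard limit
*          soft  nofile  1024
# hard limit on open files: cannot be exceeded
*          hard  nofile  4096
# cap the number of processes for members of group students
@students  hard  nproc   100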

2.5 Configuring PAM Using pam-config

The pam-config tool helps you configure the global PAM configuration files (/etc/pam.d/common-*) and several selected application configurations. For a list of supported modules, use the pam-config --list-modules command. Use the pam-config command to maintain your PAM configuration files: add new modules to your PAM configurations, delete modules, or modify their options. When changing global PAM configuration files, no manual tweaking of the PAM setup for individual applications is required.

A simple use case for pam-config involves the following steps; a combined shell sketch follows the list:

  1. Auto-generate a fresh Unix-style PAM configuration.  Let pam-config create the simplest possible setup which you can extend later on. The pam-config --create command creates a simple Unix authentication configuration. Pre-existing configuration files not maintained by pam-config are overwritten, but backup copies are kept as *.pam-config-backup.

  2. Add a new authentication method.  Adding a new authentication method (for example, LDAP) to your stack of PAM modules comes down to a simple pam-config --add --ldap command. LDAP is added wherever appropriate across all common-*-pc PAM configuration files.

  3. Add debugging for test purposes.  To make sure the new authentication procedure works as planned, turn on debugging for all PAM-related operations. The pam-config --add --ldap-debug command turns on debugging for LDAP-related PAM operations. Find the debugging output in the systemd journal (see Chapter 16, journalctl: Query the systemd Journal).

  4. Query your setup.  Before you finally apply your new PAM setup, check if it contains all the options you wanted to add. The pam-config --query --MODULE command lists both the type and the options for the queried PAM module.

  5. Remove the debug options.  Finally, remove the debug option from your setup when you are entirely satisfied with its performance. The pam-config --delete --ldap-debug command turns off debugging for LDAP authentication. If you added debugging options for other modules, use similar commands to turn these off.
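
Taken together, the steps above correspond to a shell session along these lines (a sketch only; all commands are taken from the steps):

pam-config --create               # Step 1: fresh Unix-style configuration
pam-config --add --ldap           # Step 2: add LDAP authentication
pam-config --add --ldap-debug     # Step 3: enable LDAP debugging
pam-config --query --ldap         # Step 4: inspect the LDAP module setup
pam-config --delete --ldap-debug  # Step 5: disable LDAP debugging again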

For more information on the pam-config command and the options available, refer to the manual page of pam-config(8).

2.6 Manually Configuring PAM

If you prefer to manually create or maintain your PAM configuration files, make sure to disable pam-config for these files.

When you create your PAM configuration files from scratch using the pam-config --create command, it creates symbolic links from the common-* to the common-*-pc files. pam-config only modifies the common-*-pc configuration files. Removing these symbolic links effectively disables pam-config, because pam-config only operates on the common-*-pc files and these files are not put into effect without the symbolic links.
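
For example, to take over maintenance of the common-auth stack yourself, you could replace the symbolic link with a regular copy of the file. This is only a sketch of one possible approach:

ls -l /etc/pam.d/common-auth
# lrwxrwxrwx ... common-auth -> common-auth-pc
cp --remove-destination /etc/pam.d/common-auth-pc /etc/pam.d/common-auth
# common-auth is now a regular file; pam-config no longer affects it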

Warning
Warning: Include pam_systemd.so into Configuration

If you are creating your own PAM configuration, make sure to include the line session optional pam_systemd.so. Omitting pam_systemd.so can cause problems with systemd task limits. For details, refer to the man page of pam_systemd.so.

2.7 For More Information

After installing the pam-doc package, find the following additional documentation in the /usr/share/doc/packages/pam directory:

READMEs

In the top level of this directory, there is the modules subdirectory holding README files about the available PAM modules.

The Linux-PAM System Administrators' Guide

This document comprises everything that the system administrator should know about PAM. It discusses a range of topics, from the syntax of configuration files to the security aspects of PAM.

The Linux-PAM Module Writers' Manual

This document summarizes the topic from the developer's point of view, with information about how to write standard-compliant PAM modules.

The Linux-PAM Application Developers' Guide

This document comprises everything needed by an application developer who wants to use the PAM libraries.

The PAM Manual Pages

PAM in general and the individual modules come with manual pages that provide a good overview of the functionality of all the components.

3 Using NIS

Abstract

When multiple Unix systems in a network access common resources, it becomes imperative that all user and group identities are the same for all machines in that network. The network should be transparent to users: their environments should not vary, regardless of which machine they are actually using. This can be done by means of NIS and NFS services. NFS distributes file systems over a network and is discussed in Chapter 26, Sharing File Systems with NFS.

NIS (Network Information Service) can be described as a database-like service that provides access to the contents of /etc/passwd, /etc/shadow, and /etc/group across networks. NIS can also be used for other purposes (making the contents of files like /etc/hosts or /etc/services available, for example), but this is beyond the scope of this introduction. People often refer to NIS as YP, because it works like the network's yellow pages.

3.1 Configuring NIS Servers

For configuring NIS servers, see the SUSE Linux Enterprise Server Administration Guide.

3.2 Configuring NIS Clients

To use NIS on a workstation, do the following:

  1. Start YaST › Network Services › NIS Client.

  2. Activate the Use NIS button.

  3. Enter the NIS domain. This is usually a domain name given by your administrator, or it may be received automatically via DHCP.

    Setting Domain and Address of a NIS Server
    Figure 3.1: Setting Domain and Address of a NIS Server
  4. Enter your NIS servers and separate their addresses by spaces. If you do not know your NIS server, click Find to let YaST search for any NIS servers in your domain. Depending on the size of your local network, this may be a time-consuming process. Broadcast asks for a NIS server in the local network after the specified servers fail to respond.

  5. Depending on your local installation, you may also want to activate the automounter. This option also installs additional software if required.

  6. If you do not want other hosts to be able to query which server your client is using, go to the Expert settings and disable Answer Remote Hosts. By checking Broken Server, the client is enabled to receive replies from a server communicating through an unprivileged port. For further information, see man ypbind.

  7. Click Finish to save your settings and return to the YaST control center. Your client is now configured with NIS.
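
To verify the binding from a shell afterward, the standard NIS client tools can be used (a sketch; the output depends on your site):

ypwhich        # show the NIS server the client is bound to
ypcat passwd   # dump the passwd map served by NIS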

4 Setting Up Authentication Servers and Clients Using YaST

Abstract

The Authentication Server is based on LDAP and optionally Kerberos. On SUSE Linux Enterprise Desktop you can configure it with a YaST wizard.

For more information about LDAP, see Chapter 5, LDAP—A Directory Service, and about Kerberos, see Chapter 6, Network Authentication with Kerberos.

4.1 Configuring an Authentication Server

For information about configuring an Authentication Server, see the SUSE Linux Enterprise Server documentation.

4.2 Configuring an Authentication Client with YaST

YaST allows setting up authentication on clients using several different modules.

4.3 SSSD

Two of the YaST modules are based on SSSD: User Logon Management and LDAP and Kerberos Authentication.

SSSD stands for System Security Services Daemon. SSSD talks to remote directory services that provide user data and provides various authentication methods, such as LDAP, Kerberos, or Active Directory (AD). It also provides an NSS (Name Service Switch) and PAM (Pluggable Authentication Module) interface.

SSSD can locally cache user data and then allow users to use the data, even if the real directory service is (temporarily) unreachable.

4.3.1 Checking the Status

After running one of the YaST authentication modules, you can check whether SSSD is running with:

systemctl status sssd
sssd.service - System Security Services Daemon
   Loaded: loaded (/usr/lib/systemd/system/sssd.service; enabled)
   Active: active (running) since Thu 2015-10-23 11:03:43 CEST; 5s ago
   [...]

4.3.2 Caching

To allow logging in when the authentication back-end is unavailable, SSSD will use its cache even if it was invalidated. This happens until the back-end is available again.

To invalidate the cache, run sss_cache -E (the command sss_cache is part of the package sssd-tools).

To completely remove the SSSD cache, run:

systemctl stop sssd
rm -f /var/lib/sss/db/*
systemctl start sssd

4.3.3 For More Information

For more information, see the SSSD man pages sssd.conf (man sssd.conf) and sssd (man sssd). There are also man pages for most SSSD modules.

5 LDAP—A Directory Service

Abstract

The Lightweight Directory Access Protocol (LDAP) is a set of protocols designed to access and maintain information directories. LDAP can be used for user and group management, system configuration management, address management, and more. This chapter provides a basic understanding of how OpenLDAP works.

In a network environment, it is crucial to keep important information structured and to serve it quickly. A directory service keeps information available in a well-structured and searchable form.

Ideally, a central server stores the data in a directory and distributes it to all clients using a well-defined protocol. The structured data allow a wide range of applications to access them. A central repository reduces the necessary administrative effort. The use of an open and standardized protocol like LDAP ensures that as many client applications as possible can access such information.

A directory in this context is a type of database optimized for quick and effective reading and searching:

  • To make multiple concurrent read accesses possible, the number of updates is kept very low. Write access is often limited to a few users with administrative privileges. In contrast, conventional databases are optimized for accepting the largest possible data volume in a short time.

  • When static data is administered, updates of the existing data sets are very rare. When working with dynamic data, especially data sets like bank accounts or accounting, the consistency of the data is of primary importance. If an amount is to be subtracted from one place and added to another, both operations must happen concurrently, within one transaction, to ensure balance over the data stock. Traditional relational databases therefore have a very strong focus on data consistency, such as the referential integrity support of transactions. In contrast, short-term inconsistencies are usually acceptable in LDAP directories, which do not have the same strong consistency requirements as relational databases.

The design of a directory service like LDAP is not laid out to support complex update or query mechanisms. All applications accessing this service should gain access quickly and easily.

5.1 LDAP versus NIS

Unix system administrators traditionally use NIS (Network Information Service) for name resolution and data distribution in a network. The configuration data contained in the files group, hosts, mail, netgroup, networks, passwd, printcap, protocols, rpc, and services in the /etc directory is distributed to clients all over the network. These files can be maintained without major effort because they are simple text files. The handling of larger amounts of data, however, becomes increasingly difficult because these files lack structure. NIS is only designed for Unix platforms, and is not suitable as a centralized data administration tool in heterogeneous networks.

Unlike NIS, the LDAP service is not restricted to pure Unix networks. Windows™ servers (starting with Windows 2000) support LDAP as a directory service. The application tasks mentioned above are additionally supported in non-Unix systems.

The LDAP principle can be applied to any data structure that needs to be centrally administered. A few application examples are:

  • Replacement for the NIS service

  • Mail routing (postfix)

  • Address books for mail clients, like Mozilla Thunderbird, Evolution, and Outlook

  • Administration of zone descriptions for a BIND 9 name server

  • User authentication with Samba in heterogeneous networks

This list can be extended because LDAP is extensible, unlike NIS. The clearly-defined hierarchical structure of the data simplifies the administration of large amounts of data, as it can be searched more easily.

5.2 Structure of an LDAP Directory Tree

To get background knowledge on how an LDAP server works and how the data is stored, it is vital to understand the way the data is organized on the server and how this structure enables LDAP to provide fast access to the data. To successfully operate an LDAP setup, you also need to be familiar with some basic LDAP terminology. This section introduces the basic layout of an LDAP directory tree and provides the basic terminology used with regard to LDAP. Skip this introductory section if you already have some LDAP background knowledge and only want to learn how to set up an LDAP environment in SUSE Linux Enterprise Desktop.

An LDAP directory has a tree structure. All entries (called objects) of the directory have a defined position within this hierarchy. This hierarchy is called the directory information tree (DIT). The complete path to the desired entry, which unambiguously identifies it, is called the distinguished name or DN. A single node along the path to this entry is called relative distinguished name or RDN.

The relations within an LDAP directory tree become more evident in the following example, shown in Figure 5.1, “Structure of an LDAP Directory”.

Structure of an LDAP Directory
Figure 5.1: Structure of an LDAP Directory

The complete diagram is a fictional directory information tree. The entries on three levels are depicted. Each entry corresponds to one box in the image. The complete, valid distinguished name for the fictional employee Geeko Linux, in this case, is cn=Geeko Linux,ou=doc,dc=example,dc=com. It is composed by adding the RDN cn=Geeko Linux to the DN of the preceding entry ou=doc,dc=example,dc=com.
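
Expressed in LDIF, the textual format used by the OpenLDAP tools, the entry for Geeko Linux could be sketched as follows. The sn value is an assumption for illustration; as Table 5.1 below shows, inetOrgPerson requires the attributes sn and cn:

dn: cn=Geeko Linux,ou=doc,dc=example,dc=com
objectClass: inetOrgPerson
cn: Geeko Linux
sn: Linux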

The types of objects that can be stored in the DIT are globally determined following a Schema. The type of an object is determined by the object class. The object class determines what attributes the relevant object must or can be assigned. The Schema, therefore, must contain definitions of all object classes and attributes used in the desired application scenario. A few Schemas are in common use and defined in the LDAP RFCs (see, for example, RFC 2252, RFC 2256, and RFC 4519). Additionally, Schemas are available for many other use cases (for example, Samba or NIS replacement). It is, however, possible to create custom Schemas or to use multiple Schemas complementing each other (if this is required by the environment in which the LDAP server should operate).

Table 5.1, “Commonly Used Object Classes and Attributes” offers a small overview of the object classes from core.schema and inetorgperson.schema used in the example, including required attributes (Req. Attr.) and valid attribute values.

Table 5.1: Commonly Used Object Classes and Attributes

Object Class         Meaning                                                             Example Entry   Req. Attr.
dcObject             domainComponent (name components of the domain)                     example         dc
organizationalUnit   organizationalUnit (organizational unit)                            doc             ou
inetOrgPerson        inetOrgPerson (person-related data for the intranet or Internet)    Geeko Linux     sn and cn

Example 5.1, “Excerpt from schema.core” shows an excerpt from a Schema directive with explanations.

Example 5.1: Excerpt from schema.core
attributetype (2.5.4.11 NAME ( 'ou' 'organizationalUnitName') 1
       DESC 'RFC2256: organizational unit this object belongs to' 2
       SUP name ) 3

objectclass ( 2.5.6.5 NAME 'organizationalUnit' 4
       DESC 'RFC2256: an organizational unit' 5
       SUP top STRUCTURAL 6
       MUST ou 7
MAY (userPassword $ searchGuide $ seeAlso $ businessCategory 8
  $ x121Address $ registeredAddress $ destinationIndicator
  $ preferredDeliveryMethod $ telexNumber
  $ teletexTerminalIdentifier $ telephoneNumber
  $ internationaliSDNNumber $ facsimileTelephoneNumber
  $ street $ postOfficeBox $ postalCode $ postalAddress
  $ physicalDeliveryOfficeName
  $ st $ l $ description) )
  ...

The attribute type organizationalUnitName and the corresponding object class organizationalUnit serve as an example here.

1

The name of the attribute, its unique OID (object identifier) (numerical), and the abbreviation of the attribute.

2

A brief description of the attribute with DESC. The corresponding RFC, on which the definition is based, is also mentioned here.

3

SUP indicates a superordinate attribute type to which this attribute belongs.

4

The definition of the object class organizationalUnit begins—the same as in the definition of the attribute—with an OID and the name of the object class.

5

A brief description of the object class.

6

The SUP top entry indicates that this object class is not subordinate to another object class.

7

MUST lists all attribute types that must be used in conjunction with an object of the type organizationalUnit.

8

MAY lists all attribute types that are permitted in conjunction with this object class.

A very good introduction to the use of Schemas can be found in the OpenLDAP documentation (package openldap2-doc). When installed, find it in /usr/share/doc/packages/openldap2/guide/admin/guide.html.

5.3 Configuring an LDAP Client with YaST

YaST includes the module LDAP and Kerberos Client that helps define authentication scenarios involving either LDAP or Kerberos.

It can also be used to set up LDAP and Kerberos independently of each other. However, in some cases, such as joining Active Directory (which uses a combination of LDAP and Kerberos), this module may not be the first choice. For more information, see Section 4.2, “Configuring an Authentication Client with YaST”.

Start the module by selecting Network Services › LDAP and Kerberos Client.

LDAP and Kerberos Client Window
Figure 5.2: LDAP and Kerberos Client Window

To configure an LDAP client, follow the procedure below:

  1. In the window LDAP and Kerberos Client, click Change Settings.

    Make sure that the tab Use a Directory as Identity Provider (LDAP) is chosen.

  2. Specify one or more LDAP server URLs, host names, or IP addresses under Enter LDAP server locations. When specifying multiple addresses, separate them with spaces.

  3. Specify the appropriate LDAP distinguished name (DN) under DN of Search Base. For example, a valid entry could be dc=example,dc=com.

  4. If your LDAP server supports TLS encryption, choose the appropriate security option under Secure LDAP Connection.

    To first ask the server whether it supports TLS encryption and be able to downgrade to an unencrypted connection if it does not, use Secure Communication via StartTLS.

  5. Activate other options as necessary:

    • You can Allow users to authenticate via LDAP and Automatically Create Home Directories on the local computer for them.

    • Use Cache LDAP Entries For Faster Response to cache LDAP entries locally. However, this bears the danger that entries can be slightly out of date.

    • Specify the types of data that should be used from the LDAP source, such as Users and Groups, Super-User Commands, and Network Disk Locations (network-shared drives that can be automatically mounted on request).

    • Specify the distinguished name (DN) and password of the user under whose name you want to bind to the LDAP directory in DN of Bind User and Password of the Bind User.

      Otherwise, if the server supports it, you can also leave both text boxes empty to bind anonymously to the server.

      Warning
      Warning: Authentication Without Encryption

      When using authentication without enabling transport encryption using TLS or StartTLS, the password will be transmitted in the clear.

    Under Extended Options, you can additionally configure timeouts for BIND operations.

  6. To check whether the LDAP connection works, click Test Connection.

  7. To leave the dialog, click OK. Then wait for the setup to complete.

    Finally, click Finish.
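
Independently of YaST, you can check that the directory answers queries with the ldapsearch command (usually provided by the openldap2-client package). The host name and base DN below are assumptions matching the examples above:

ldapsearch -x -H ldap://ldap.example.com -b dc=example,dc=com
# -x: simple (anonymous) bind, -H: server URI, -b: search base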

5.4 Configuring LDAP Users and Groups in YaST

The actual registration of user and group data differs only slightly from the procedure when not using LDAP. The following instructions relate to the administration of users. The procedure for administering groups is analogous.

  1. Access the YaST user administration with Security and Users › User and Group Management.

  2. Use Set Filter to limit the view of users to the LDAP users and enter the password for Root DN.

  3. Click Add to enter the user configuration. A dialog with four tabs opens:

    1. Specify the user's name, login name, and password in the User Data tab.

    2. Check the Details tab for the group membership, login shell, and home directory of the new user. If necessary, change the default to values that better suit your needs.

    3. Modify or accept the default Password Settings.

    4. Enter the Plug-Ins tab, select the LDAP plug-in, and click Launch to configure additional LDAP attributes assigned to the new user.

  4. Click OK to apply your settings and leave the user configuration.

The initial input form of user administration offers LDAP Options. This allows you to apply LDAP search filters to the set of available users. Alternatively open the module for configuring LDAP users and groups by selecting LDAP User and Group Configuration.

5.5 For More Information

More complex subjects (like SASL configuration or setting up a replicating LDAP server that distributes the workload among multiple slaves) were omitted from this chapter. Find detailed information about both subjects in the OpenLDAP 2.4 Administrator's Guide (see below for its locations).

The Web site of the OpenLDAP project offers exhaustive documentation for beginner and advanced LDAP users:

OpenLDAP Faq-O-Matic

A detailed question and answer collection applying to the installation, configuration, and use of OpenLDAP. Find it at http://www.openldap.org/faq/data/cache/1.html.

Quick Start Guide

Brief step-by-step instructions for installing your first LDAP server. Find it at http://www.openldap.org/doc/admin24/quickstart.html or on an installed system in Section 2 of /usr/share/doc/packages/openldap2/guide/admin/guide.html.

OpenLDAP 2.4 Administrator's Guide

A detailed introduction to all important aspects of LDAP configuration, including access controls and encryption. See http://www.openldap.org/doc/admin24/ or, on an installed system, /usr/share/doc/packages/openldap2/guide/admin/guide.html.

Understanding LDAP

A detailed general introduction to the basic principles of LDAP: http://www.redbooks.ibm.com/redbooks/pdfs/sg244986.pdf.

Printed literature about LDAP:

  • LDAP System Administration by Gerald Carter (ISBN 1-56592-491-6)

  • Understanding and Deploying LDAP Directory Services by Howes, Smith, and Good (ISBN 0-672-32316-8)

The ultimate reference material on LDAP is the set of corresponding RFCs (requests for comments), 2251 to 2256.

6 Network Authentication with Kerberos


An open network provides no means of ensuring that a workstation can identify its users properly, except through the usual password mechanisms. In common installations, the user must enter the password each time a service inside the network is accessed. Kerberos provides an authentication method with which a user registers only once and is trusted in the complete network for the rest of the session. To have a secure network, the following requirements must be met:

  • Have all users prove their identity for each desired service and make sure that no one can take the identity of someone else.

  • Make sure that each network server also proves its identity. Otherwise an attacker might be able to impersonate the server and obtain sensitive information transmitted to the server. This concept is called mutual authentication, because the client authenticates to the server and vice versa.

Kerberos helps you meet these requirements by providing strongly encrypted authentication. Only the basic principles of Kerberos are discussed here. For detailed technical instruction, refer to the Kerberos documentation.

6.1 Kerberos Terminology

The following glossary defines some Kerberos terminology.

credential

Users or clients need to present some kind of credentials that authorize them to request services. Kerberos knows two kinds of credentials—tickets and authenticators.

ticket

A ticket is a per-server credential used by a client to authenticate at a server from which it is requesting a service. It contains the name of the server, the client's name, the client's Internet address, a time stamp, a lifetime, and a random session key. All this data is encrypted using the server's key.

authenticator

Combined with the ticket, an authenticator is used to prove that the client presenting a ticket is really the one it claims to be. An authenticator is built using the client's name, the workstation's IP address, and the current workstation's time, all encrypted with the session key known only to the client and the relevant server. An authenticator can only be used once, unlike a ticket. A client can build an authenticator itself.

principal

A Kerberos principal is a unique entity (a user or service) to which Kerberos can assign a ticket. A principal consists of the following components:

PRIMARY/INSTANCE@REALM
  • primary:  The first part of the principal. In the case of users, this is usually the same as the user name.

  • instance (optional):  Additional information characterizing the primary. This string is separated from the primary by a /.

    tux@example.org and tux/admin@example.org can both exist on the same Kerberos system and are treated as different principals.

  • realm:  Specifies the Kerberos realm. Normally, your realm is your domain name in uppercase letters.

mutual authentication

Kerberos ensures that both client and server can be sure of each other's identity. They share a session key, which they can use to communicate securely.

session key

Session keys are temporary private keys generated by Kerberos. They are known to the client and used to encrypt the communication between the client and the server for which it requested and received a ticket.

replay

Almost all messages sent in a network can be eavesdropped, stolen, and resent. In the Kerberos context, this would be most dangerous if an attacker manages to obtain your request for a service containing your ticket and authenticator. The attacker could then try to resend it (replay) to impersonate you. However, Kerberos implements several mechanisms to deal with this problem.

server or service

Service is used to refer to a specific action to perform. The process behind this action is called a server.

6.2 How Kerberos Works

Kerberos is often called a third-party trusted authentication service, which means all its clients trust Kerberos's judgment of another client's identity. Kerberos keeps a database of all its users and their private keys.

To ensure Kerberos is working correctly, run both the authentication and ticket-granting server on a dedicated machine. Make sure that only the administrator can access this machine physically and over the network. Reduce the (networking) services running on it to the absolute minimum—do not even run sshd.

6.2.1 First Contact

Your first contact with Kerberos is quite similar to any login procedure at a normal networking system. Enter your user name. This piece of information and the name of the ticket-granting service are sent to the authentication server (Kerberos). If the authentication server knows you, it generates a random session key for further use between your client and the ticket-granting server. Now the authentication server prepares a ticket for the ticket-granting server. The ticket contains the following information—all encrypted with a session key only the authentication server and the ticket-granting server know:

  • The names of both the client and the ticket-granting server

  • The current time

  • A lifetime assigned to this ticket

  • The client's IP address

  • The newly-generated session key

This ticket is then sent back to the client together with the session key, again in encrypted form, but this time the private key of the client is used. This private key is only known to Kerberos and the client, because it is derived from your user password. Now that the client has received this response, you are prompted for your password. This password is converted into the key that can decrypt the package sent by the authentication server. The package is unwrapped and password and key are erased from the workstation's memory. As long as the lifetime given to the ticket used to obtain other tickets does not expire, your workstation can prove your identity.

6.2.2 Requesting a Service

To request a service from any server in the network, the client application needs to prove its identity to the server. Therefore, the application generates an authenticator. An authenticator consists of the following components:

  • The client's principal

  • The client's IP address

  • The current time

  • A checksum (chosen by the client)

All this information is encrypted using the session key that the client has already received for this special server. The authenticator and the ticket for the server are sent to the server. The server uses its copy of the session key to decrypt the authenticator, which gives it all the information needed about the client requesting its service. The server compares this information to that contained in the ticket and checks whether the ticket and the authenticator originate from the same client.

Without any security measures implemented on the server side, this stage of the process would be an ideal target for replay attacks. Someone could try to resend a request stolen off the net some time before. To prevent this, the server does not accept any request with a time stamp and ticket received previously. In addition to that, a request with a time stamp differing too much from the time the request is received is ignored.

6.2.3 Mutual Authentication

Kerberos authentication can be used in both directions. It is not only a question of the client being the one it claims to be. The server should also be able to authenticate itself to the client requesting its service. Therefore, it sends an authenticator itself. It adds one to the checksum it received in the client's authenticator and encrypts it with the session key, which is shared between it and the client. The client takes this response as a proof of the server's authenticity and they both start cooperating.

6.2.4 Ticket Granting—Contacting All Servers

Tickets are designed to be used for one server at a time. Therefore, you need to get a new ticket each time you request another service. Kerberos implements a mechanism to obtain tickets for individual servers. This service is called the ticket-granting service. The ticket-granting service is a service (like any other service mentioned before) and uses the same access protocols that have already been outlined. Any time an application needs a ticket that has not already been requested, it contacts the ticket-granting server. This request consists of the following components:

  • The requested principal

  • The ticket-granting ticket

  • An authenticator

Like any other server, the ticket-granting server now checks the ticket-granting ticket and the authenticator. If they are considered valid, the ticket-granting server builds a new session key to be used between the original client and the new server. Then the ticket for the new server is built, containing the following information:

  • The client's principal

  • The server's principal

  • The current time

  • The client's IP address

  • The newly-generated session key

The new ticket has a lifetime, which is either the remaining lifetime of the ticket-granting ticket or the default for the service. The lesser of both values is assigned. The client receives this ticket and the session key, which are sent by the ticket-granting service. But this time the answer is encrypted with the session key that came with the original ticket-granting ticket. The client can decrypt the response without requiring the user's password when a new service is contacted. Kerberos can thus acquire ticket after ticket for the client without bothering the user.

6.3 User View of Kerberos

Ideally, a user's only contact with Kerberos happens during login at the workstation. The login process includes obtaining a ticket-granting ticket. At logout, a user's Kerberos tickets are automatically destroyed, which makes it difficult for anyone else to impersonate this user.

The automatic expiration of tickets can lead to a situation where a user's login session lasts longer than the maximum lifespan given to the ticket-granting ticket (a reasonable setting is 10 hours). However, the user can get a new ticket-granting ticket by running kinit. Enter the password again, and Kerberos provides access to the desired services without additional authentication. To get a list of all the tickets silently acquired for you by Kerberos, run klist.
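
For example (the principal name is an assumption):

kinit tux@EXAMPLE.ORG   # obtain a new ticket-granting ticket; prompts for the password
klist                   # list the tickets currently cached for you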

Here is a short list of applications that use Kerberos authentication. These applications can be found under /usr/lib/mit/bin or /usr/lib/mit/sbin after installing the package krb5-apps-clients. They all have the full functionality of their common Unix and Linux brothers plus the additional bonus of transparent authentication managed by Kerberos:

  • telnet, telnetd

  • rlogin

  • rsh, rcp, rshd

  • ftp, ftpd

You no longer need to enter your password for using these applications because Kerberos has already proven your identity. ssh, if compiled with Kerberos support, can even forward all the tickets acquired for one workstation to another one. If you use ssh to log in to another workstation, ssh makes sure that the encrypted contents of the tickets are adjusted to the new situation. Simply copying tickets between workstations is not sufficient because the ticket contains workstation-specific information (the IP address). XDM and GDM offer Kerberos support, too. Read more about the Kerberos network applications in Kerberos V5 UNIX User's Guide at http://web.mit.edu/kerberos.

6.4 Setting up Kerberos using LDAP and Kerberos Client

YaST includes the module LDAP and Kerberos Client that helps define authentication scenarios involving either LDAP or Kerberos.

It can also be used to set up LDAP and Kerberos independently of each other. However, in some cases, such as joining Active Directory (which uses a combination of LDAP and Kerberos), this module may not be the first choice. For more information, see Section 4.2, “Configuring an Authentication Client with YaST”.

Start the module by selecting Network Services › LDAP and Kerberos Client.

LDAP and Kerberos Client Window
Figure 6.1: LDAP and Kerberos Client Window

To configure a Kerberos client, follow the procedure below:

  1. In the window LDAP and Kerberos Client, click Change Settings.

    Choose the tab Authentication via Kerberos.

    Tab Authentication via Kerberos
  2. Click Add Realm.

  3. In the dialog that appears, specify the correct Realm name. Usually, the realm name is an uppercase version of the domain name. Additionally, you can specify the following:

    • To apply mappings from the realm name to the domain name, activate Map Domain Name to the Realm and/or Map Wildcard Domain Name to the Realm.

    • You can specify the Host Name of Administration Server, the Host Name of Master Key Distribution Server and additional Key Distribution Centers.

      All of these items are optional if they can be automatically discovered via the SRV and TXT records in DNS.

    • To manually map Principals to local user names, use Custom Mappings of Principal Names to User Names.

      You can also use auth_to_local rules to supply such mappings using Custom Rules for Mapping Principal Names to User Names. For more information about using such rules, see the official documentation at https://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/krb5_conf.html#realms.

    Continue with OK.

  4. To add more realms, repeat from Step 2.

  5. To allow Kerberos users to log in and to have home directories created for them automatically, activate Allow Kerberos Users to Authenticate and Automatically Create Home Directory.

  6. If you left empty the optional text boxes in Step 3, make sure to enable automatic discovery of realms and key distribution centers by activating Use DNS TXT Record to Discover Realms and Use DNS SRV Record to Discover KDC Servers.

  7. You can additionally activate the following:

    • Allow Insecure Encryption (for Windows NT) allows the encryption types listed as weak at http://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/kdc_conf.html#encryption-types.

    • Allow KDC on Other Networks to Issue Authentication Tickets allows forwarding of tickets.

    • Allow Kerberos-Enabled Services to Take on The Identity Of a User allows the use of proxies between the computer of the user and the key distribution center.

    • Issue Address-Less Tickets for Computers Behind NAT allows granting tickets to users behind networks using network address translation.

  8. To set up allowed encryption types and define the name of the keytab file which lists the names of principals and their encrypted keys, use the Extended Options.

  9. Finish with OK and Finish.

    YaST may now install extra packages.

6.5 For More Information

The official site of MIT Kerberos is http://web.mit.edu/kerberos. There, find links to any other relevant resource concerning Kerberos, including Kerberos installation, user, and administration guides.

The book Kerberos—A Network Authentication System by Brian Tung (ISBN 0-201-37924-4) offers extensive information.

7 Active Directory Support


Active Directory* (AD) is a directory service based on LDAP, Kerberos, and other services. It is used by Microsoft* Windows* to manage resources, services, and people. In a Microsoft Windows network, Active Directory provides information about these objects, restricts access to them, and enforces policies. SUSE® Linux Enterprise Desktop lets you join existing Active Directory domains and integrate your Linux machine into a Windows environment.

7.1 Integrating Linux and Active Directory Environments

With a Linux client (configured as an Active Directory client) that is joined to an existing Active Directory domain, you can benefit from various features not available on a pure SUSE Linux Enterprise Desktop Linux client:

Browsing Shared Files and Directories with SMB

GNOME Files (previously called Nautilus) supports browsing shared resources through SMB.

Sharing Files and Directories with SMB

GNOME Files supports sharing directories and files as in Windows.

Accessing and Manipulating User Data on the Windows Server

Through GNOME Files, users can access their Windows user data and can edit, create, and delete files and directories on the Windows server. Users can access their data without having to enter their password multiple times.

Offline Authentication

Users can log in and access their local data on the Linux machine even if they are offline or the Active Directory server is unavailable for other reasons.

Windows Password Change

This port of Active Directory support in Linux enforces corporate password policies stored in Active Directory. The display managers and console support password change messages and accept your input. You can even use the Linux passwd command to set Windows passwords.

Single-Sign-On through Kerberized Applications

Many desktop applications are Kerberos-enabled (kerberized), which means they can transparently handle authentication for the user without the need for password reentry at Web servers, proxies, groupware applications, or other locations.

Note
Note: Managing Unix Attributes from Windows Server* 2016 and Later

In Windows Server 2016 and later, Microsoft has removed the role IDMU/NIS Server and along with it the Unix Attributes plug-in for the Active Directory Users and Computers MMC snap-in.

However, Unix attributes can still be managed manually when Advanced Options are enabled in the Active Directory Users and Computers MMC snap-in. For more information, see https://blogs.technet.microsoft.com/activedirectoryua/2016/02/09/identity-management-for-unix-idmu-is-deprecated-in-windows-server/.

Alternatively, use the method described in Procedure 7.1, “Joining an Active Directory Domain Using User Logon Management” to complete attributes on the client side (in particular, see Step 6.c).

The following section contains technical background for most of the previously named features. For more information about file and printer sharing using Active Directory, see GNOME User Guide.

7.2 Background Information for Linux Active Directory Support

Many system components need to interact flawlessly to integrate a Linux client into an existing Windows Active Directory domain. The following sections focus on the underlying processes of the key events in Active Directory server and client interaction.

To communicate with the directory service, the client needs to share at least two protocols with the server:

LDAP

LDAP is a protocol optimized for managing directory information. A Windows domain controller with Active Directory can use the LDAP protocol to exchange directory information with the clients. To learn more about LDAP in general and about the open source port of it, OpenLDAP, refer to Chapter 5, LDAP—A Directory Service.

Kerberos

Kerberos is a trusted third-party authentication service. All its clients trust Kerberos's verification of another client's identity, enabling kerberized single-sign-on (SSO) solutions. Windows supports a Kerberos implementation, making Kerberos SSO possible even with Linux clients. To learn more about Kerberos in Linux, refer to Chapter 6, Network Authentication with Kerberos.

Depending on which YaST module you use to set up Kerberos authentication, different client components process account and authentication data:

Solutions Based on SSSD
  • The sssd daemon is the central part of this solution. It handles all communication with the Active Directory server.

  • To gather name service information, sssd_nss is used.

  • To authenticate users, the pam_sss module for PAM is used. The creation of user homes for the Active Directory users on the Linux client is handled by pam_mkhomedir.

    For more information about PAM, see Chapter 2, Authentication with PAM.

Solution Based On Winbind (Samba)
  • The winbindd daemon is the central part of this solution. It handles all communication with the Active Directory server.

  • To gather name service information, nss_winbind is used.

  • To authenticate users, the pam_winbind module for PAM is used. The creation of user homes for the Active Directory users on the Linux client is handled by pam_mkhomedir.

    For more information about PAM, see Chapter 2, Authentication with PAM.

Figure 7.1, “Schema of Winbind-based Active Directory Authentication” highlights the most prominent components of Winbind-based Active Directory authentication.

Figure 7.1: Schema of Winbind-based Active Directory Authentication

Applications that are PAM-aware, like the login routines and the GNOME display manager, interact with the PAM and NSS layer to authenticate against the Windows server. Applications supporting Kerberos authentication (such as file managers, Web browsers, or e-mail clients) use the Kerberos credential cache to access the user's Kerberos tickets, making them part of the SSO framework.

7.2.1 Domain Join

During domain join, the server and the client establish a secure relation. On the client, the following tasks need to be performed to join the existing LDAP and Kerberos SSO environment provided by the Windows domain controller. The entire join process is handled by the YaST Domain Membership module, which can be run during installation or in the installed system:

  1. The Windows domain controller providing both LDAP and KDC (Key Distribution Center) services is located.

  2. A machine account for the joining client is created in the directory service.

  3. An initial ticket granting ticket (TGT) is obtained for the client and stored in its local Kerberos credential cache. The client needs this TGT to get further tickets allowing it to contact other services, like contacting the directory server for LDAP queries.

  4. NSS and PAM configurations are adjusted to enable the client to authenticate against the domain controller.

During client boot, the winbind daemon is started and retrieves the initial Kerberos ticket for the machine account. winbindd automatically refreshes the machine's ticket to keep it valid. To keep track of the current account policies, winbindd periodically queries the domain controller.
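The join can also be verified from the command line. A minimal check, assuming the winbind-based setup and that the Samba net tool is installed:

net ads testjoin    # verifies that the machine account join is still valid
net ads info        # prints information about the domain controller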

7.2.2 Domain Login and User Homes

The login manager of GNOME (GDM) has been extended to allow the handling of Active Directory domain login. Users can choose to log in to the primary domain the machine has joined or to one of the trusted domains with which the domain controller of the primary domain has established a trust relationship.

User authentication is mediated by several PAM modules as described in Section 7.2, “Background Information for Linux Active Directory Support”. If there are errors, the error codes are translated into user-readable error messages that PAM gives at login through any of the supported methods (GDM, console, and SSH):

Password has expired

The user sees a message stating that the password has expired and needs to be changed. The system prompts for a new password and informs the user if the new password does not comply with corporate password policies (for example the password is too short, too simple, or already in the history). If a user's password change fails, the reason is shown and a new password prompt is given.

Account disabled

The user sees an error message stating that the account has been disabled and to contact the system administrator.

Account locked out

The user sees an error message stating that the account has been locked and to contact the system administrator.

Password has to be changed

The user can log in but receives a warning that the password needs to be changed soon. This warning is sent three days before the password expires. After expiration, the user cannot log in.

Invalid workstation

When a user is restricted to specific workstations and the current SUSE Linux Enterprise Desktop machine is not among them, a message appears that this user cannot log in from this workstation.

Invalid logon hours

When a user is only allowed to log in during working hours and tries to log in outside working hours, a message informs the user that logging in is not possible at that time.

Account expired

An administrator can set an expiration time for a specific user account. If that user tries to log in after expiration, the user gets a message that the account has expired and cannot be used to log in.

During a successful authentication, the client acquires a ticket granting ticket (TGT) from the Kerberos server of Active Directory and stores it in the user's credential cache. It also renews the TGT in the background, requiring no user interaction.

SUSE Linux Enterprise Desktop supports local home directories for Active Directory users. If configured through YaST as described in Section 7.3, “Configuring a Linux Client for Active Directory”, user home directories are created when a Windows/Active Directory user first logs in to the Linux client. These home directories look and feel identical to standard Linux user home directories and work independently of the Active Directory Domain Controller.

Using a local user home, it is possible to access a user's data on this machine (even when the Active Directory server is disconnected) as long as the Linux client has been configured to perform offline authentication.

7.2.3 Offline Service and Policy Support

Users in a corporate environment must have the ability to become roaming users (for example, to switch networks or even work disconnected for some time). To enable users to log in to a disconnected machine, extensive caching was integrated into the winbind daemon. The winbind daemon enforces password policies even in the offline state. It tracks the number of failed login attempts and reacts according to the policies configured in Active Directory. Offline support is disabled by default and must be explicitly enabled in the YaST Domain Membership module.

When the domain controller has become unavailable, the user can still access network resources (other than the Active Directory server itself) with valid Kerberos tickets that have been acquired before losing the connection (as in Windows). Password changes cannot be processed unless the domain controller is online. While disconnected from the Active Directory server, a user cannot access any data stored on this server. When a workstation has become disconnected from the network entirely and connects to the corporate network again later, SUSE Linux Enterprise Desktop acquires a new Kerberos ticket when the user has locked and unlocked the desktop (for example, using a desktop screen saver).

7.3 Configuring a Linux Client for Active Directory

Before your client can join an Active Directory domain, some adjustments must be made to your network setup to ensure the flawless interaction of client and server.

DNS

Configure your client machine to use a DNS server that can forward DNS requests to the Active Directory DNS server. Alternatively, configure your machine to use the Active Directory DNS server as the name service data source. A quick way to verify this setup is sketched after this list.

NTP

To succeed with Kerberos authentication, the client must have its time set accurately. It is highly recommended to use a central NTP time server for this purpose (this can also be the NTP server running on your Active Directory domain controller). If the clock skew between your Linux host and the domain controller exceeds a certain limit, Kerberos authentication fails and the client is logged in using the weaker NTLM (NT LAN Manager) authentication. For more details about using Active Directory for time synchronization, see Procedure 7.2, “Joining an Active Directory Domain Using Windows Domain Membership”. A command for checking the clock skew manually is included in the sketch after this list.

Firewall

To browse your network neighborhood, either disable the firewall entirely or mark the interface used for browsing as part of the internal zone.

To change the firewall settings on your client, log in as root and start the YaST firewall module. Select Interfaces. Select your network interface from the list of interfaces and click Change. Select Internal Zone and apply your settings with OK. Leave the firewall settings with Next › Finish. To disable the firewall, check the Disable Firewall Automatic Starting option, and leave the firewall module with Next › Finish.

Active Directory Account

You cannot log in to an Active Directory domain unless the Active Directory administrator has provided you with a valid user account for that domain. Use the Active Directory user name and password to log in to the Active Directory domain from your Linux client.
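Two quick checks for the DNS and NTP prerequisites above, assuming the hypothetical Active Directory domain example.com with the domain controller dc1.example.com (dig is provided by the bind-utils package, ntpdate by the ntp package):

dig -t SRV _ldap._tcp.example.com +short    # DNS: should list the domain controllers
ntpdate -q dc1.example.com                  # NTP: query only; prints the clock offset without setting the time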

7.3.1 Choosing Which YaST Module to Use for Connecting to Active Directory

YaST contains multiple modules that allow connecting to an Active Directory:

  • User Logon Management.  Uses SSSD. See Section 7.3.2, “Joining Active Directory Using User Logon Management”.

  • Windows Domain Membership.  Uses winbind (Samba). See Section 7.3.3, “Joining Active Directory Using Windows Domain Membership”.

7.3.2 Joining Active Directory Using User Logon Management

The YaST module User Logon Management supports authentication at an Active Directory. It also supports the following related authentication and identification providers:

Identification Providers
  • Delegate to third-party software library Support for legacy NSS providers via a proxy.

  • FreeIPA FreeIPA and Red Hat Enterprise Identity Management provider.

  • Generic directory service (LDAP) An LDAP provider. For more information about configuring LDAP, see man 5 sssd-ldap.

  • Local SSSD file database An SSSD-internal provider for local users.

Authentication Providers
  • Delegate to third-party software library Relay authentication to another PAM target via a proxy.

  • FreeIPA FreeIPA and Red Hat Enterprise Identity Management provider.

  • Generic Kerberos service Kerberos authentication.

  • Generic directory service (LDAP) An LDAP provider.

  • Local SSSD file database An SSSD-internal provider for local users.

  • This domain does not provide authentication service Disables authentication explicitly.

To join an Active Directory domain using SSSD and the User Logon Management module of YaST, proceed as follows:

Procedure 7.1: Joining an Active Directory Domain Using User Logon Management
  1. Open YaST.

  2. To be able to use DNS auto-discovery later, set up the Active Directory Domain Controller (the Active Directory server) as the name server for your client.

    1. In YaST, click Network Settings.

    2. Select Hostname/DNS, then enter the IP address of the Active Directory Domain Controller into the text box Name Server 1.

      Save the setting with OK.

  3. From the YaST main window, start the module User Logon Management.

    The module opens with an overview showing different network properties of your computer and the authentication method currently in use.

    Overview window showing the computer name, IP address, and its authentication setting.
    Figure 7.2: Main Window of User Logon Management
  4. To start editing, click Change Settings.

  5. Now join the domain.

    1. Click Join Domain.

    2. In the dialog that appears, specify the correct Domain name. Then specify the services to use for identity data and authentication: Select Microsoft Active Directory for both.

      Ensure that Enable the domain is activated.

      Click OK.

    3. (Optional) Usually, you can keep the default settings in the following dialog. However, there are reasons to make changes:

      • If the Local Host Name Does Not Match the Host Name Set on the Domain Controller.  Find out whether the host name of your computer matches the name by which your computer is known to the Active Directory Domain Controller. In a terminal, run the command hostname, then compare its output to the configuration of the Active Directory Domain Controller.

        If the values differ, specify the host name from the Active Directory configuration under AD hostname. Otherwise, leave the appropriate text box empty.

      • If You Do Not Want to Use DNS Auto-Discovery.  Specify the Host names of Active Directory servers that you want to use. If there are multiple Domain Controllers, separate their host names with commas.

    4. To continue, click OK.

      If not all required software is installed yet, it will be installed now. Afterward, the availability of the configured Active Directory Domain Controller is checked.

    5. If everything is correct, the following dialog should now show that it has discovered an Active Directory Server but that you are Not yet enrolled.

      In the dialog, specify the Username and Password of the Active Directory administrator account (usually Administrator).

      To make sure that the current domain is enabled for Samba, activate Overwrite Samba configuration to work with this AD.

      To enroll, click OK.

      Figure 7.3: Enrolling into a Domain
    6. You should now see a message confirming that you have enrolled successfully. Finish with OK.

  6. After enrolling, configure the client using the window Manage Domain User Logon.

    Figure 7.4: Configuration Window of User Logon Management
    1. To allow logging in to the computer using login data provided by Active Directory, activate Allow Domain User Logon.

    2. (Optional) Under Enable domain data source, activate additional data sources, such as information on which users are allowed to use sudo or which network drives are available.

    3. To allow Active Directory users to have home directories, activate Create Home Directories. The path for home directories can be set in multiple ways—on the client, on the server, or both ways:

      • To configure the home directory paths on the Domain Controller, set an appropriate value for the attribute UnixHomeDirectory for each user. Additionally, make sure that this attribute is replicated to the global catalog. For information on achieving that under Windows, see https://support.microsoft.com/en-us/kb/248717.

      • To configure home directory paths on the client in such a way that precedence will be given to the path set on the domain controller, use the option fallback_homedir.

      • To configure home directory paths on the client in such a way that the client setting will override the server setting, use override_homedir.

      As settings on the Domain Controller are outside the scope of this documentation, only the client-side options are described in the following.

      From the side bar, select Service Options › Name switch, then click Extended Options. From that window, select either fallback_homedir or override_homedir, then click Add.

      Specify a value. To have home directories follow the format /home/USER_NAME, use /home/%u. For more information about possible variables, see the man page sssd.conf (man 5 sssd.conf), section override_homedir. A configuration sketch follows this procedure.

      Click OK.

  7. Save the changes by clicking OK. Then make sure that the values displayed now are correct. To leave the dialog, click Cancel.
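The settings made in this procedure typically end up in /etc/sssd/sssd.conf. The following excerpt is only an illustrative sketch; the domain name example.com is hypothetical:

[domain/example.com]
# Used when the directory does not provide a usable home directory path
fallback_homedir = /home/%u
# Alternatively, let the client-side setting override the server:
# override_homedir = /home/%u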

7.3.3 Joining Active Directory Using Windows Domain Membership

To join an Active Directory domain using winbind and the Windows Domain Membership module of YaST, proceed as follows:

Procedure 7.2: Joining an Active Directory Domain Using Windows Domain Membership
  1. Log in as root and start YaST.

  2. Start Network Services › Windows Domain Membership.

  3. Enter the domain to join at Domain or Workgroup in the Windows Domain Membership screen (see Figure 7.5, “Determining Windows Domain Membership”). If the DNS settings on your host are properly integrated with the Windows DNS server, enter the Active Directory domain name in its DNS format (mydomain.mycompany.com). If you enter the short name of your domain (also known as the pre–Windows 2000 domain name), YaST must rely on NetBIOS name resolution instead of DNS to find the correct domain controller.

    Figure 7.5: Determining Windows Domain Membership
  4. To use the SMB source for Linux authentication, activate Also Use SMB Information for Linux Authentication.

  5. To automatically create a local home directory for Active Directory users on the Linux machine, activate Create Home Directory on Login.

  6. Check Offline Authentication to allow your domain users to log in even if the Active Directory server is temporarily unavailable, or if you do not have a network connection.

  7. To change the UID and GID ranges for the Samba users and groups, select Expert Settings. Let DHCP retrieve the WINS server only if you need it. This is the case when some machines are resolved only by the WINS system.

  8. Configure NTP time synchronization for your Active Directory environment by selecting NTP Configuration and entering an appropriate server name or IP address. This step is obsolete if you have already entered the appropriate settings in the stand-alone YaST NTP configuration module.

  9. Click OK and confirm the domain join when prompted for it.

  10. Provide the password for the Windows administrator on the Active Directory server and click OK (see Figure 7.6, “Providing Administrator Credentials”).

    Figure 7.6: Providing Administrator Credentials

After you have joined the Active Directory domain, you can log in to it from your workstation using the display manager of your desktop or the console.

Important
Important: Domain Name

Joining a domain may not succeed if the domain name ends with .local. Names ending in .local cause conflicts with Multicast DNS (MDNS) where .local is reserved for link-local host names.

Note
Note: Only Administrators Can Enroll a Computer

Only a domain administrator account, such as Administrator, can join SUSE Linux Enterprise Desktop into Active Directory.

7.3.4 Checking Active Directory Connection Status

To check whether you are successfully enrolled in an Active Directory domain, use the following commands (an example follows the list):

  • klist shows whether the current user has a valid Kerberos ticket.

  • getent passwd shows published LDAP data for all users.
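For example, assuming the hypothetical domain EXAMPLE and the domain user geeko (the exact user name format depends on your configuration):

klist                           # lists the cached Kerberos tickets of the current user
getent passwd 'EXAMPLE\geeko'   # shows the LDAP data published for this domain user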

7.4 Logging In to an Active Directory Domain

Provided your machine has been configured to authenticate against Active Directory and you have a valid Windows user identity, you can log in to your machine using the Active Directory credentials. Login is supported for GNOME, the console, SSH, and any other PAM-aware application.

Important
Important: Offline Authentication

SUSE Linux Enterprise Desktop supports offline authentication, allowing you to log in to your client machine even when it is offline. See Section 7.2.3, “Offline Service and Policy Support” for details.

7.4.1 GDM

To authenticate a GNOME client machine against an Active Directory server, proceed as follows:

  1. Click Not listed.

  2. In the text box Username, enter the domain name and the Windows user name in this form: DOMAIN_NAME\USER_NAME.

  3. Enter your Windows password.

If configured to do so, SUSE Linux Enterprise Desktop creates a user home directory on the local machine on the first login of each user authenticated via Active Directory. This allows you to benefit from the Active Directory support of SUSE Linux Enterprise Desktop while still having a fully functional Linux machine at your disposal.

7.4.2 Console Login

Besides logging in to the Active Directory client machine using a graphical front-end, you can log in using the text-based console or even remotely using SSH.

To log in to your Active Directory client from a console, enter DOMAIN_NAME\USER_NAME at the login: prompt and provide the password.

To remotely log in to your Active Directory client machine using SSH, proceed as follows:

  1. At the login prompt, enter:

    ssh DOMAIN_NAME\\USER_NAME@HOST_NAME

    The \ delimiter between domain and login name must be escaped with another \ sign (a concrete example follows this procedure).

  2. Provide the user's password.
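For example, to log in as the hypothetical domain user geeko of the domain EXAMPLE on the host host.example.com:

ssh EXAMPLE\\geeko@host.example.com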

7.5 Changing Passwords

SUSE Linux Enterprise Desktop helps the user choose a suitable new password that meets the corporate security policy. The underlying PAM module retrieves the current password policy settings from the domain controller and informs the user about the specific password quality requirements by means of a message at login. Like its Windows counterpart, SUSE Linux Enterprise Desktop presents a message describing:

  • Password history settings

  • Minimum password length requirements

  • Minimum password age

  • Password complexity

The password change process cannot succeed unless all requirements have been met. Feedback about the password status is given both through the display managers and the console.

GDM provides feedback about password expiration and the prompt for new passwords in an interactive mode. To change passwords in the display managers, provide the password information when prompted.

You can use the standard Linux utility passwd to change your Windows password instead of having to manipulate this data on the server. To change your Windows password, proceed as follows:

  1. Log in at the console.

  2. Enter passwd.

  3. Enter your current password when prompted.

  4. Enter the new password.

  5. Reenter the new password for confirmation. If your new password does not comply with the policies on the Windows server, this feedback is given to you and you are prompted for another password.

To change your Windows password from the GNOME desktop, proceed as follows:

  1. Click the Computer icon on the left edge of the panel.

  2. Select Control Center.

  3. From the Personal section, select About Me › Change Password.

  4. Enter your old password.

  5. Enter and confirm the new password.

  6. Leave the dialog with Close to apply your settings.

Part II Local Security

8 Configuring Security Settings with YaST

The YaST module Security Center and Hardening offers a central clearinghouse to configure security-related settings for SUSE Linux Enterprise Desktop. Use it to configure security aspects such as settings for the login procedure and for password creation, for boot permissions, user creation or for default file permissions. Launch it from the YaST control center by Security and Users › Security Center and Hardening. The Security Center dialog always starts with the Security Overview, and other configuration dialogs are available from the right pane.

9 Authorization with PolKit

PolKit (formerly known as PolicyKit) is an application framework that acts as a negotiator between the unprivileged user session and the privileged system context. Whenever a process from the user session tries to carry out an action in the system context, PolKit is queried. Based on its configuration—specified in a so-called policy—the answer could be yes, no, or needs authentication. Unlike classical privilege authorization programs such as sudo, PolKit does not grant root permissions to an entire session, but only to the action in question.

10 Access Control Lists in Linux

POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more flexibly than with the traditional permission concept.

11 Encrypting Partitions and Files

Encrypting files, partitions, and entire disks prevents unauthorized access to your data and protects your confidential files and documents.

12 Certificate Store

Certificates play an important role in the authentication of companies and individuals. Usually certificates are administered by the application itself. In some cases, it makes sense to share certificates between applications. The certificate store is a common ground for Firefox, Evolution, and NetworkManager. This chapter explains some details.

13 Intrusion Detection with AIDE

Securing your systems is a mandatory task for any mission-critical system administrator. Because it is impossible to always guarantee that the system is not compromised, it is very important to do extra checks regularly (for example with cron) to ensure that the system is still under your control. This is where AIDE, the Advanced Intrusion Detection Environment, comes into play.

8 Configuring Security Settings with YaST

  • Filename: security_yast2_security.xml
  • ID: cha.security.yast_security
Abstract

The YaST module Security Center and Hardening offers a central clearinghouse to configure security-related settings for SUSE Linux Enterprise Desktop. Use it to configure security aspects such as settings for the login procedure and for password creation, for boot permissions, user creation or for default file permissions. Launch it from the YaST control center by Security and Users › Security Center and Hardening. The Security Center dialog always starts with the Security Overview, and other configuration dialogs are available from the right pane.

8.1 Security Overview

The Security Overview displays a comprehensive list of the most important security settings for your system. The security status of each entry in the list is clearly visible. A green check mark indicates a secure setting while a red cross indicates an entry as being insecure. Click Help to open an overview of the setting and information on how to make it secure. To change a setting, click the corresponding link in the Status column. Depending on the setting, the following entries are available:

Enabled/Disabled

Click this entry to toggle the status of the setting to either enabled or disabled.

Configure

Click this entry to launch another YaST module for configuration. You will return to the Security Overview when leaving the module.

Unknown

A setting's status is set to unknown when the associated service is not installed. Such a setting does not represent a potential security risk.

Figure 8.1: YaST Security Center and Hardening: Security Overview

8.2 Predefined Security Configurations

SUSE Linux Enterprise Desktop comes with three Predefined Security Configurations. These configurations affect all the settings available in the Security Center module. Each configuration can be modified to your needs using the dialogs available from the right pane; doing so changes its state to Custom Settings:

Workstation

A configuration for a workstation with any kind of network connection (including a connection to the Internet).

Roaming Device

This setting is designed for a laptop or tablet that connects to different networks.

Network Server

Security settings designed for a machine providing network services such as a Web server, file server, name server, etc. This set provides the most secure configuration of the predefined settings.

Custom Settings

If Custom Settings is pre-selected when you open the Predefined Security Configurations dialog, one of the predefined sets has been modified. Actively choosing this option does not change the current configuration—you need to change it using the Security Overview.

8.3 Password Settings

Passwords that are easy to guess are a major security issue. The Password Settings dialog provides the means to ensure that only secure passwords can be used.

Check New Passwords

If this option is activated, a warning is issued if new passwords appear in a dictionary, or if they are proper names (proper nouns).

Minimum Acceptable Password Length

If the user chooses a password with a length shorter than specified here, a warning will be issued.

Number of Passwords to Remember

When password expiration is activated (via Password Age), this setting stores the given number of a user's previous passwords, preventing their reuse.

Password Encryption Method

Choose a password encryption algorithm. Normally there is no need to change the default (Blowfish).

Password Age

Activate password expiration by specifying a minimum and a maximum time limit (in days). By setting the minimum age to a value greater than 0 days, you can prevent users from immediately changing their passwords again (and in doing so circumventing the password expiration). Use the values 0 and 99999 to deactivate password expiration.

Days Before Password Expires Warning

When a password expires, the user receives a warning in advance. Specify the number of days prior to the expiration date that the warning should be issued.

8.4 Boot Settings

In this dialog, configure which users can shut down the machine via the graphical login manager. You can also specify how Ctrl+Alt+Del is interpreted and who can hibernate the system.

8.5 Login Settings

This dialog lets you configure security-related login settings:

Delay after Incorrect Login Attempt

To make it difficult to guess a user's password by repeatedly logging in, it is recommended to delay the display of the login prompt that follows an incorrect login. Specify the value in seconds. Make sure that users who have mistyped their passwords do not need to wait too long.

Allow Remote Graphical Login

When checked, the graphical login manager (GDM) can be accessed from the network. This is a potential security risk.

8.6 User Addition

Set minimum and maximum values for user and group IDs. These default settings would rarely need to be changed.

8.7 Miscellaneous Settings

Other security settings that do not fit the above-mentioned categories are listed here:

File Permissions

SUSE Linux Enterprise Desktop comes with three predefined sets of file permissions for system files. These permission sets define whether a regular user may read log files or start certain programs. Easy file permissions are suitable for stand-alone machines. These settings allow regular users to, for example, read most system files. See the file /etc/permissions.easy for the complete configuration. The Secure file permissions are designed for multiuser machines with network access. A thorough explanation of these settings can be found in /etc/permissions.secure. The Paranoid settings are the most restrictive ones and should be used with care. See /etc/permissions.paranoid for more information.

User Launching updatedb

The program updatedb scans the system and creates a database of all file locations which can be queried with the command locate. When updatedb is run as user nobody, only world-readable files will be added to the database. When run as user root, almost all files (except the ones root is not allowed to read) will be added.

Enable Magic SysRq Keys

The magic SysRq key is a key combination that enables you to have some control over the system even when it has crashed. The complete documentation can be found at https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html.
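Some of these settings can also be inspected from the command line. A brief sketch, assuming the chkstat tool from the permissions package:

less /etc/permissions.easy    # review the Easy permission profile
chkstat --system              # compare installed files against the configured profile
cat /proc/sys/kernel/sysrq    # current SysRq setting (1 means all functions are enabled)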

9 Authorization with PolKit

  • Filename: security_policy_kit.xml
  • ID: cha.security.policykit
Abstract

PolKit (formerly known as PolicyKit) is an application framework that acts as a negotiator between the unprivileged user session and the privileged system context. Whenever a process from the user session tries to carry out an action in the system context, PolKit is queried. Based on its configuration—specified in a so-called policy—the answer could be yes, no, or needs authentication. Unlike classical privilege authorization programs such as sudo, PolKit does not grant root permissions to an entire session, but only to the action in question.

9.1 Conceptual Overview

PolKit works by restricting specific actions to certain users, groups, or names. It then defines how those users are allowed to perform these actions.

9.1.1 Available Authentication Agents

When a user starts a session (using the graphical environment or on the console), each session consists of the authority and an authentication agent. The authority is implemented as a service on the system message bus, whereas the authentication agent is used to authenticate the current user, who started the session. The user needs to prove their authenticity, for example, using a passphrase.

Each desktop environment has its own authentication agent. Usually it is started automatically, whatever environment you choose.

9.1.2 Structure of PolKit

PolKit's configuration depends on actions and authorization rules:

Actions (file extension *.policy)

Written as XML files and located in /usr/share/polkit-1/actions. Each file defines one or more actions, and each action contains descriptions and default permissions. Although a system administrator can write their own rules, the files shipped with PolKit must not be edited.

Authorization Rules (file extension *.rules)

Written as JavaScript files and located in two places: /usr/share/polkit-1/rules.d is used for third-party packages and /etc/polkit-1/rules.d for local configurations. Each rule file refers to an action specified in an action file. A rule determines which restrictions apply to which users. For example, a rule file could overrule a restrictive default permission and allow some users to perform the action.

9.1.3 Available Commands

PolKit contains several commands for specific tasks (see also the specific man page for further details):

pkaction

Get details about a defined action. See Section 9.3, “Querying Privileges” for more information.

pkcheck

Checks whether a process is authorized, specified by either --process or --system-bus-name.

pkexec

Allows an authorized user to execute a specific program as another user.

pkttyagent

Starts a textual authentication agent. This agent is used if a desktop environment does not have its own authentication agent.
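A brief sketch of typical invocations; the action ID is the gparted example discussed in Section 9.4.1, “Adding Action Rules”:

pkexec /usr/sbin/gparted    # run the program as root after authenticating
pkcheck --action-id org.opensuse.policykit.gparted --process $$    # check whether the current shell is authorized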

9.1.4 Available Policies and Supported Applications

At the moment, not all applications requiring privileges use PolKit. Find the most important policies available on SUSE® Linux Enterprise Desktop below, sorted into the categories where they are used.

PulseAudio
  • Set scheduling priorities for the PulseAudio daemon

CUPS
  • Add, remove, edit, enable or disable printers

Backup Manager
  • Modify schedule

GNOME
  • Modify system and mandatory values with GConf
  • Change the system time

NetworkManager
  • Apply and modify connections

PolKit
  • Read and change privileges for other users
  • Modify defaults

PackageKit
  • Update and remove packages
  • Change and refresh repositories
  • Install local files
  • Rollback
  • Import repository keys
  • Accept EULAs
  • Set the network proxy

System
  • Wake on LAN
  • Mount or unmount fixed, hotpluggable and encrypted devices
  • Eject and decrypt removable media
  • Enable or disable WLAN
  • Enable or disable Bluetooth
  • Device access
  • Stop, suspend, hibernate and restart the system
  • Undock a docking station
  • Change power-management settings

YaST
  • Register product
  • Change the system time and language

9.2 Authorization Types

Every time a PolKit-enabled process carries out a privileged operation, PolKit is asked whether this process is entitled to do so. PolKit answers according to the policy defined for this process. The answers can be yes, no, or authentication needed. By default, a policy contains implicit privileges, which automatically apply to all users. It is also possible to specify explicit privileges which apply to a specific user.

9.2.1 Implicit Privileges

Implicit privileges can be defined for any active and inactive sessions. An active session is the one in which you are currently working. It becomes inactive when you switch to another console, for example. When setting implicit privileges to no, no user is authorized, whereas yes authorizes all users. However, it is usually useful to demand authentication.

A user can authorize either by authenticating as root or by authenticating as themselves. Both authentication methods exist in four variants:

Authentication

The user always needs to authenticate.

One Shot Authentication

The authentication is bound to the instance of the program currently running. After the program is restarted, the user is required to authenticate again.

Keep Session Authentication

The authentication dialog offers a check button Remember authorization for this session. If checked, the authentication is valid until the user logs out.

Keep Indefinitely Authentication

The authentication dialog offers a check button Remember authorization. If checked, the user needs to authenticate only once.

9.2.2 Explicit Privileges

Explicit privileges can be granted to specific users. They can either be granted without limitations, or, when using constraints, limited to an active session and/or a local console.

It is not only possible to grant privileges to a user; a user can also be blocked. Blocked users cannot carry out an action requiring authorization, even though the default implicit policy allows authorization by authentication.

9.2.3 Default Privileges

Each application supporting PolKit comes with a default set of implicit policies defined by the application's developers. Those policies are the so-called upstream defaults. The privileges defined by the upstream defaults are not necessarily the ones that are activated by default on SUSE systems. SUSE Linux Enterprise Desktop comes with a predefined set of privileges that override the upstream defaults:

/etc/polkit-default-privs.standard

Defines privileges suitable for most desktop systems. It is active by default.

/etc/polkit-default-privs.restrictive

Defines privileges for machines that are administered centrally.

To switch between the two sets of default privileges, adjust the value of POLKIT_DEFAULT_PRIVS to either restrictive or standard in /etc/sysconfig/security. Then run the command set_polkit_default_privs as root.
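A minimal sketch of the switch (run as root):

grep POLKIT_DEFAULT_PRIVS /etc/sysconfig/security    # check the current value
# set POLKIT_DEFAULT_PRIVS="restrictive" (or "standard") in that file, then run:
/sbin/set_polkit_default_privs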

Do not modify the two files in the list above. To define your own custom set of privileges, use /etc/polkit-default-privs.local. For details, refer to Section 9.4.3, “Modifying Configuration Files for Implicit Privileges”.

9.3 Querying Privileges

To query privileges use the command pkaction included in PolKit.

PolKit comes with command line tools for changing privileges and executing commands as another user (see Section 9.1.3, “Available Commands” for a short overview). Each existing policy has a unique, self-explanatory name with which it can be identified. List all available policies with the command pkaction.

When invoked with no parameters, the command pkaction lists all policies. By adding the --show-overrides option, you can list all policies that differ from the default values. To reset the privileges for a given action to the (upstream) defaults, use the option --reset-defaults ACTION. See man pkaction for more information.

If you want to display the needed authorization for a given policy (for example, org.freedesktop.login1.reboot) use pkaction as follows:

pkaction -v --action-id org.freedesktop.login1.reboot
org.freedesktop.login1.reboot:
  description:       Reboot the system
  message:           Authentication is required to allow rebooting the system
  vendor:            The systemd Project
  vendor_url:        http://www.freedesktop.org/wiki/Software/systemd
  icon:
  implicit any:      auth_admin_keep
  implicit inactive: auth_admin_keep
  implicit active:   yes

The keyword auth_admin_keep means that users need to authenticate as an administrative user; the authorization is then retained for a limited time.

Note
Note: Restrictions of pkaction on SUSE Linux Enterprise Desktop

pkaction always operates on the upstream defaults. Therefore it cannot be used to list or restore the defaults shipped with SUSE Linux Enterprise Desktop. To do so, refer to Section 9.5, “Restoring the Default Privileges”.

9.4 Modifying Configuration Files

Adjusting privileges by modifying configuration files is useful when you want to deploy the same set of policies to different machines, for example to the computers of a specific team. It is possible to change implicit and explicit privileges by modifying configuration files.

9.4.1 Adding Action Rules

The available actions depend on what additional packages you have installed on your system. For a quick overview, use pkaction to list all defined actions.

To get an idea, the following example describes how the command gparted (GNOME Partition Editor) is integrated into PolKit.

The file /usr/share/polkit-1/actions/org.opensuse.policykit.gparted.policy contains the following content:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC
 "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
<policyconfig> 1

  <action id="org.opensuse.policykit.gparted"> 2
    <message>Authentication is required to run the GParted Partition Editor</message>
    <icon_name>gparted</icon_name>
    <defaults> 3
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>auth_admin</allow_active>
    </defaults>
    <annotate 4
      key="org.freedesktop.policykit.exec.path">/usr/sbin/gparted</annotate>
    <annotate 4
      key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
  </action>

</policyconfig>

1  Root element of the policy file.

2  Contains one single action.

3  The defaults element contains the permissions used in remote sessions like SSH or VNC (element allow_inactive), when logged in directly on a TTY or X display (element allow_active), or for both (element allow_any). The value auth_admin indicates that authentication as an administrative user is required.

4  The annotate element contains specific information regarding how PolKit performs an action. In this case, it contains the path to the executable and states whether a GUI is allowed to open an X display.

To add your own policy, create a .policy file with the structure above, add the appropriate value into the id attribute, and define the default permissions.

9.4.2 Adding Authorization Rules

Your own authorization rules overrule the default settings. To add your own settings, store your files under /etc/polkit-1/rules.d/.

The files in this directory start with a two-digit number, followed by a descriptive name, and end with .rules. They are processed in lexical order of their file names. For example, 00-foo.rules is processed before 60-bar.rules or 90-default-privs.rules.

Inside the file, the script checks for the specified action ID, which is defined in the .policy file. For example, if you want to allow the command gparted to be executed by any member of the admin group, check for the action ID org.opensuse.policykit.gparted:

/* Allow users in admin group to run GParted without authentication */
polkit.addRule(function(action, subject) {
    if (action.id == "org.opensuse.policykit.gparted" &&
        subject.isInGroup("admin")) {
        return polkit.Result.YES;
    }
});
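To activate such a rule, save it to a file following the naming scheme described above, for example (the file name is hypothetical):

install -m 644 50-gparted-admin.rules /etc/polkit-1/rules.d/

polkitd monitors the rules directories and reloads changed rules automatically; no service restart is required.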

Find the description of all classes and methods of the functions in the PolKit API at http://www.freedesktop.org/software/polkit/docs/latest/ref-api.html.

9.4.3 Modifying Configuration Files for Implicit Privileges

SUSE Linux Enterprise Desktop ships with two sets of default authorizations, located in /etc/polkit-default-privs.standard and /etc/polkit-default-privs.restrictive. For more information, refer to Section 9.2.3, “Default Privileges”.

Custom privileges are defined in /etc/polkit-default-privs.local. Privileges defined here will always take precedence over the ones defined in the other configuration files. To define your custom set of privileges, do the following:

  1. Open /etc/polkit-default-privs.local. To define a privilege, add a line for each policy with the following format:

    <privilege_identifier>     <any session>:<inactive session>:<active session>

    For example (a fuller sketch follows this procedure):

    org.freedesktop.policykit.modify-defaults     auth_admin_keep_always

    The following values are valid for the SESSION placeholders:

    yes

    grant privilege

    no

    block

    auth_self

    user needs to authenticate with own password every time the privilege is requested

    auth_self_keep_session

    user needs to authenticate with own password once per session, privilege is granted for the whole session

    auth_self_keep_always

    user needs to authenticate with own password once, privilege is granted for the current and for future sessions

    auth_admin

    user needs to authenticate with root password every time the privilege is requested

    auth_admin_keep_session

    user needs to authenticate with root password once per session, privilege is granted for the whole session

    auth_admin_keep_always

    user needs to authenticate with root password once, privilege is granted for the current and for future sessions

  2. Run as root for changes to take effect:

    # /sbin/set_polkit_default_privs
  3. Optionally check the list of all privilege identifiers with the command pkaction.
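A short sketch of what /etc/polkit-default-privs.local might contain. The action IDs are the examples used earlier in this chapter; the values are illustrative:

# <privilege identifier>                    <any>:<inactive>:<active>
org.freedesktop.login1.reboot               auth_admin:auth_admin:yes
# A single value applies to all three session types:
org.freedesktop.policykit.modify-defaults   auth_admin_keep_always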

9.5 Restoring the Default Privileges

SUSE Linux Enterprise Desktop comes with a predefined set of privileges that is activated by default and thus overrides the upstream defaults. For details, refer to Section 9.2.3, “Default Privileges”.

Since the graphical PolKit tools and the command line tools always operate on the upstream defaults, SUSE Linux Enterprise Desktop includes an additional command-line tool, set_polkit_default_privs. It resets privileges to the values defined in /etc/polkit-default-privs.*. However, the command set_polkit_default_privs will only reset policies that are set to the upstream defaults.

Procedure 9.1: Restoring the SUSE Linux Enterprise Desktop Defaults
  1. Make sure /etc/polkit-default-privs.local does not contain any overrides of the default policies.

    Important
    Important: Custom Policy Configuration

    Policies defined in /etc/polkit-default-privs.local will be applied on top of the defaults during the next step.

  2. To reset all policies to the upstream defaults first and then apply the SUSE Linux Enterprise Desktop defaults:

    rm -f /var/lib/polkit/* && set_polkit_default_privs

10 Access Control Lists in Linux

  • Filename: security_acls.xml
  • ID: cha.security.acls
Abstract

POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more flexibly than with the traditional permission concept.

The term POSIX ACL suggests that this is a true POSIX (portable operating system interface) standard. The respective draft standards POSIX 1003.1e and POSIX 1003.2c have been withdrawn for several reasons. Nevertheless, ACLs (as found on many systems belonging to the Unix family) are based on these drafts and the implementation of file system ACLs (as described in this chapter) follows these two standards.

10.1 Traditional File Permissions

Find detailed information about the traditional file permissions in the GNU Coreutils Info page, Node File permissions (info coreutils "File permissions"). More advanced features are the setuid, setgid, and sticky bit.

10.1.1 The setuid Bit

In certain situations, the access permissions may be too restrictive. Therefore, Linux has additional settings that enable the temporary change of the current user and group identity for a specific action. For example, the passwd program normally requires root permissions to access /etc/passwd. This file contains some important information, like the home directories of users and user and group IDs. Thus, a normal user would not be able to change /etc/passwd, because it would be too dangerous to grant all users direct write access to this file. A possible solution to this problem is the setuid mechanism. setuid (set user ID) is a special file attribute that instructs the system to execute programs marked accordingly under a specific user ID. Consider the passwd command:

-rwsr-xr-x  1 root shadow 80036 2004-10-02 11:08 /usr/bin/passwd

You can see the s that denotes that the setuid bit is set for the user permission. By means of the setuid bit, all users starting the passwd command execute it as root.
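The setuid bit itself is set with chmod. A minimal sketch; the path is hypothetical:

chmod u+s /usr/local/bin/myprog     # symbolic form
chmod 4755 /usr/local/bin/myprog    # numeric form: the leading 4 sets the setuid bit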

10.1.2 The setgid Bit

The setuid bit applies to users. However, there is also an equivalent property for groups: the setgid bit. A program for which this bit was set runs under the group ID under which it was saved, no matter which user starts it. Therefore, in a directory with the setgid bit, all newly created files and subdirectories are assigned to the group to which the directory belongs. Consider the following example directory:

drwxrws--- 2 tux archive 48 Nov 19 17:12  backup

You can see the s that denotes that the setgid bit is set for the group permission. The owner of the directory and members of the group archive may access this directory. Users that are not members of this group are denied access. The effective group ID of all files written in this directory will be archive. For example, a backup program that runs with the group ID archive can access this directory even without root privileges.
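Such a directory can be set up as follows (names taken from the example above):

mkdir backup
chgrp archive backup
chmod 2770 backup    # the leading 2 sets the setgid bit (result: drwxrws---)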

10.1.3 The Sticky Bit

There is also the sticky bit. It makes a difference whether it belongs to an executable program or a directory. If it belongs to a program, a file marked in this way is loaded to RAM to avoid needing to get it from the hard disk each time it is used. This attribute is used rarely, because modern hard disks are fast enough. If this bit is assigned to a directory, it prevents users from deleting each other's files. Typical examples include the /tmp and /var/tmp directories:

drwxrwxrwt 2 root root 1160 2002-11-19 17:15 /tmp
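A world-writable directory with the sticky bit, like /tmp, can be created as follows (the path is hypothetical):

mkdir /srv/shared
chmod 1777 /srv/shared    # the leading 1 sets the sticky bit (result: drwxrwxrwt)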

10.2 Advantages of ACLs

Traditionally, three permission sets are defined for each file object on a Linux system. These sets include the read (r), write (w), and execute (x) permissions for each of three types of users—the file owner, the group, and other users. In addition to that, it is possible to set the set user id, the set group id, and the sticky bit. This lean concept is fully adequate for most practical cases. However, for more complex scenarios or advanced applications, system administrators formerly needed to use several workarounds to circumvent the limitations of the traditional permission concept.

ACLs can be used as an extension of the traditional file permission concept. They allow the assignment of permissions to individual users or groups even if these do not correspond to the original owner or the owning group. Access control lists are a feature of the Linux kernel and are currently supported by ReiserFS, Ext2, Ext3, JFS, and XFS. Using ACLs, complex scenarios can be realized without implementing complex permission models on the application level.

The advantages of ACLs are evident if you want to replace a Windows server with a Linux server. Some connected workstations may continue to run under Windows even after the migration. The Linux system offers file and print services to the Windows clients with Samba. With Samba supporting access control lists, user permissions can be configured both on the Linux server and in Windows with a graphical user interface (only Windows NT and later). With winbindd, part of the Samba suite, it is even possible to assign permissions to users only existing in the Windows domain without any account on the Linux server.

10.3 Definitions

User Class

The conventional POSIX permission concept uses three classes of users for assigning permissions in the file system: the owner, the owning group, and other users. Three permission bits can be set for each user class, giving permission to read (r), write (w), and execute (x).

ACL

The user and group access permissions for all kinds of file system objects (files and directories) are determined by means of ACLs.

Default ACL

Default ACLs can only be applied to directories. They determine the permissions a file system object inherits from its parent directory when it is created (see the sketch after this list).

ACL Entry

Each ACL consists of a set of ACL entries. An ACL entry contains a type, a qualifier for the user or group to which the entry refers, and a set of permissions. For some entry types, the qualifier for the group or users is undefined.
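Default ACLs are set with the -d option of setfacl. A brief sketch, assuming a directory mydir and a group mascots (the same names are used in the example in Section 10.4.2, “A Directory with an ACL”):

setfacl -d -m group:mascots:r-x mydir    # files created in mydir inherit this entry
getfacl mydir                            # default entries are listed with a default: prefix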

10.4 Handling ACLs

Table 10.1, “ACL Entry Types” summarizes the six possible types of ACL entries, each defining permissions for a user or a group of users. The owner entry defines the permissions of the user owning the file or directory. The owning group entry defines the permissions of the file's owning group. The superuser can change the owner or owning group with chown or chgrp, in which case the owner and owning group entries refer to the new owner and owning group. Each named user entry defines the permissions of the user specified in the entry's qualifier field. Each named group entry defines the permissions of the group specified in the entry's qualifier field. Only the named user and named group entries have a qualifier field that is not empty. The other entry defines the permissions of all other users.

The mask entry further limits the permissions granted by named user, named group, and owning group entries by defining which of the permissions in those entries are effective and which are masked. If permissions exist in one of the mentioned entries and in the mask, they are effective. Permissions contained only in the mask or only in the actual entry are not effective—meaning the permissions are not granted. All permissions defined in the owner and owning group entries are always effective. The example in Table 10.2, “Masking Access Permissions” demonstrates this mechanism.

There are two basic classes of ACLs: A minimum ACL contains only the entries for the types owner, owning group, and other, which correspond to the conventional permission bits for files and directories. An extended ACL goes beyond this. It must contain a mask entry and may contain several entries of the named user and named group types.

Table 10.1: ACL Entry Types

Type            Text Form

owner           user::rwx
named user      user:name:rwx
owning group    group::rwx
named group     group:name:rwx
mask            mask::rwx
other           other::rwx

Table 10.2: Masking Access Permissions

Entry Type                Text Form           Permissions

named user                user:geeko:r-x      r-x
mask                      mask::rw-           rw-
effective permissions:                        r--

10.4.1 ACL Entries and File Mode Permission Bits

Figure 10.1, “Minimum ACL: ACL Entries Compared to Permission Bits” and Figure 10.2, “Extended ACL: ACL Entries Compared to Permission Bits” illustrate the two cases of a minimum ACL and an extended ACL. The figures are structured in three blocks—the left block shows the type specifications of the ACL entries, the center block displays an example ACL, and the right block shows the respective permission bits according to the conventional permission concept (for example, as displayed by ls -l). In both cases, the owner class permissions are mapped to the ACL entry owner. Other class permissions are mapped to the respective ACL entry. However, the mapping of the group class permissions is different in the two cases.

Figure 10.1: Minimum ACL: ACL Entries Compared to Permission Bits

In the case of a minimum ACL—without mask—the group class permissions are mapped to the ACL entry owning group. This is shown in Figure 10.1, “Minimum ACL: ACL Entries Compared to Permission Bits”. In the case of an extended ACL—with mask—the group class permissions are mapped to the mask entry. This is shown in Figure 10.2, “Extended ACL: ACL Entries Compared to Permission Bits”.

Extended ACL: ACL Entries Compared to Permission Bits
Figure 10.2: Extended ACL: ACL Entries Compared to Permission Bits

This mapping approach ensures the smooth interaction of applications, regardless of whether they have ACL support. The access permissions that were assigned by means of the permission bits represent the upper limit for all other fine adjustments made with an ACL. Changes made to the permission bits are reflected by the ACL and vice versa.

10.4.2 A Directory with an ACL

With getfacl and setfacl on the command line, you can access ACLs. The usage of these commands is demonstrated in the following example.

Before creating the directory, use the umask command to define which access permissions should be masked each time a file object is created. The command umask 027 sets the default permissions by giving the owner the full range of permissions (0), denying the group write access (2), and giving other users no permissions (7). umask actually masks the corresponding permission bits or turns them off. For details, consult the umask man page.

mkdir mydir creates the mydir directory with the default permissions as set by umask. Use ls -dl mydir to check whether all permissions were assigned correctly. The output for this example is:

drwxr-x--- ... tux project3 ... mydir

With getfacl mydir, check the initial state of the ACL. This gives information like:

# file: mydir
# owner: tux
# group: project3
user::rwx
group::r-x
other::---

The first three output lines display the name, owner, and owning group of the directory. The next three lines contain the three ACL entries owner, owning group, and other. In fact, in the case of this minimum ACL, the getfacl command does not produce any information you could not have obtained with ls.

Modify the ACL to assign read, write, and execute permissions to an additional user geeko and an additional group mascots with:

setfacl -m user:geeko:rwx,group:mascots:rwx mydir

The option -m prompts setfacl to modify the existing ACL. The following argument indicates the ACL entries to modify (multiple entries are separated by commas). The final part specifies the name of the directory to which these modifications should be applied. Use the getfacl command to take a look at the resulting ACL.

# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx
group::r-x
group:mascots:rwx
mask::rwx
other::---

In addition to the entries initiated for the user geeko and the group mascots, a mask entry has been generated. This mask entry is set automatically so that all permissions are effective. setfacl automatically adapts existing mask entries to the settings modified, unless you deactivate this feature with -n. The mask entry defines the maximum effective access permissions for all entries in the group class. This includes named user, named group, and owning group. The group class permission bits displayed by ls -dl mydir now correspond to the mask entry.

drwxrwx---+ ... tux project3 ... mydir

The first column of the output contains an additional + to indicate that there is an extended ACL for this item.

According to the output of the ls command, the permissions for the mask entry include write access. Traditionally, such permission bits would mean that the owning group (here project3) also has write access to the directory mydir.

However, the effective access permissions for the owning group correspond to the overlapping portion of the permissions defined for the owning group and for the mask—which is r-x in our example (see Table 10.2, “Masking Access Permissions”). As far as the effective permissions of the owning group in this example are concerned, nothing has changed even after the addition of the ACL entries.

Edit the mask entry with setfacl or chmod. For example, use chmod g-w mydir. ls -dl mydir then shows:

drwxr-x---+ ... tux project3 ... mydir

getfacl mydir provides the following output:

# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx          # effective: r-x
group::r-x
group:mascots:rwx       # effective: r-x
mask::r-x
other::---

After executing chmod to remove the write permission from the group class bits, the output of ls is sufficient to see that the mask bits must have changed accordingly: write permission is again limited to the owner of mydir. The output of getfacl confirms this. This output includes a comment for all those entries in which the effective permission bits do not correspond to the original permissions, because they are filtered according to the mask entry. The original permissions can be restored at any time with chmod g+w mydir.

10.4.3 A Directory with a Default ACL

Directories can have a default ACL, which is a special kind of ACL defining the access permissions that objects in the directory inherit when they are created. A default ACL affects both subdirectories and files.

10.4.3.1 Effects of a Default ACL

There are two ways in which the permissions of a directory's default ACL are passed to the files and subdirectories:

  • A subdirectory inherits the default ACL of the parent directory both as its default ACL and as an ACL.

  • A file inherits the default ACL as its ACL.

All system calls that create file system objects use a mode parameter that defines the access permissions for the newly created file system object. If the parent directory does not have a default ACL, the permission bits as defined by the umask are subtracted from the permissions as passed by the mode parameter, with the result being assigned to the new object. If a default ACL exists for the parent directory, the permission bits assigned to the new object correspond to the overlapping portion of the permissions of the mode parameter and those that are defined in the default ACL. The umask is disregarded in this case.

10.4.3.2 Application of Default ACLs

The following three examples show the main operations for directories and default ACLs:

  1. Add a default ACL to the existing directory mydir with:

    setfacl -d -m group:mascots:r-x mydir

    The option -d of the setfacl command tells setfacl to apply the modifications that follow (option -m) to the default ACL.

    Take a closer look at the result of this command:

    getfacl mydir
    
    # file: mydir
    # owner: tux
    # group: project3
    user::rwx
    user:geeko:rwx
    group::r-x
    group:mascots:rwx
    mask::rwx
    other::---
    default:user::rwx
    default:group::r-x
    default:group:mascots:r-x
    default:mask::r-x
    default:other::---

    getfacl returns both the ACL and the default ACL. The default ACL is formed by all lines that start with default. Although you merely executed the setfacl command with an entry for the mascots group for the default ACL, setfacl automatically copied all other entries from the ACL to create a valid default ACL. Default ACLs do not have an immediate effect on access permissions. They only come into play when file system objects are created. These new objects inherit permissions only from the default ACL of their parent directory.

  2. In the next example, use mkdir to create a subdirectory in mydir, which inherits the default ACL.

    mkdir mydir/mysubdir
    
    getfacl mydir/mysubdir
    
    # file: mydir/mysubdir
    # owner: tux
    # group: project3
    user::rwx
    group::r-x
    group:mascots:r-x
    mask::r-x
    other::---
    default:user::rwx
    default:group::r-x
    default:group:mascots:r-x
    default:mask::r-x
    default:other::---

    As expected, the newly-created subdirectory mysubdir has the permissions from the default ACL of the parent directory. The ACL of mysubdir is an exact reflection of the default ACL of mydir. The default ACL that this directory will hand down to its subordinate objects is also the same.

  3. Use touch to create a file in the mydir directory, for example, touch mydir/myfile. ls -l mydir/myfile then shows:

    -rw-r-----+ ... tux project3 ... mydir/myfile

    The output of getfacl mydir/myfile is:

    # file: mydir/myfile
    # owner: tux
    # group: project3
    user::rw-
    group::r-x          # effective:r--
    group:mascots:r-x   # effective:r--
    mask::r--
    other::---

    touch uses a mode with the value 0666 when creating new files, which means that the files are created with read and write permissions for all user classes, provided no other restrictions exist in umask or in the default ACL (see Section 10.4.3.1, “Effects of a Default ACL”). In effect, this means that all access permissions not contained in the mode value are removed from the respective ACL entries. Although no permissions were removed from the ACL entry of the group class, the mask entry was modified to mask permissions not set in mode.

    This approach ensures the smooth interaction of applications (such as compilers) with ACLs. You can create files with restricted access permissions and subsequently mark them as executable. The mask mechanism guarantees that the right users and groups can execute them as desired.

10.4.4 The ACL Check Algorithm

A check algorithm is applied before any process or application is granted access to an ACL-protected file system object. As a basic rule, the ACL entries are examined in the following sequence: owner, named user, owning group or named group, and other. The access is handled in accordance with the entry that best suits the process. Permissions do not accumulate.

Things are more complicated if a process belongs to more than one group and would potentially suit several group entries. An entry is randomly selected from the suitable entries with the required permissions. It is irrelevant which of the entries triggers the final result access granted. Likewise, if none of the suitable group entries contain the required permissions, a randomly selected entry triggers the final result access denied.
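As an illustration, consider the following hypothetical ACL (the file name, users, and groups are invented for this example) and a process of the user wilber, who is a member of both mascots and project3 but is neither the owner nor a named user:

# file: report
# owner: tux
# group: users
user::rw-
group::r--
group:mascots:r--
group:project3:rw-
mask::rw-
other::---

A write request by wilber suits both named group entries. Only group:project3 grants write access (and the mask does not filter it out), so access is granted via that entry. The read permission of mascots and the write permission of project3 are never combined.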

10.5 ACL Support in Applications

ACLs can be used to implement very complex permission scenarios that meet the requirements of modern applications. The traditional permission concept and ACLs can be combined in a smart manner. The basic file commands (cp, mv, ls, etc.) support ACLs, as do Samba and Nautilus.

Unfortunately, many editors and file managers still lack ACL support. When copying files with Emacs, for example, the ACLs of these files are lost. When modifying files with an editor, the ACLs of files are sometimes preserved and sometimes not, depending on the backup mode of the editor used. If the editor writes the changes to the original file, the ACL is preserved. If the editor saves the updated contents to a new file that is subsequently renamed to the old file name, the ACLs may be lost, unless the editor supports ACLs. Except for the star archiver, there are currently no backup applications that preserve ACLs.

10.6 For More Information

For more information about ACLs, see the man pages for getfacl(1), acl(5), and setfacl(1).

11 Encrypting Partitions and Files

  • Filename: security_cryptofs.xml
  • ID: cha.security.cryptofs

Encrypting files, partitions, and entire disks prevents unauthorized access to your data and protects your confidential files and documents.

You can choose between the following encryption options:

Encrypting a Hard Disk Partition

It is possible to create an encrypted partition with YaST during installation or in an already installed system. For further info, see Section 11.1.1, “Creating an Encrypted Partition during Installation” and Section 11.1.2, “Creating an Encrypted Partition on a Running System”. This option can also be used for removable media, such as external hard disks, as described in Section 11.1.4, “Encrypting the Content of Removable Media”.

Creating an Encrypted Virtual Disk

You can create a file-based encrypted virtual disk on your hard disk or a removable medium with YaST. The encrypted virtual disk can then be used as a regular folder for storing files or directories. For more information, refer to Section 11.1.3, “Creating an Encrypted Virtual Disk”.

Encrypting Home Directories

With SUSE Linux Enterprise Desktop, you can also create encrypted user home directories. When the user logs in to the system, the encrypted home directory is mounted and the contents are made available to the user. Refer to Section 11.2, “Using Encrypted Home Directories” for more information.

Encrypting Single Files with GPG

To quickly encrypt one or several files, you can use the GPG tool. See Section 11.3, “Encrypting Files with GPG” for more information.

Warning
Warning: Encryption Offers Limited Protection

Encryption methods described in this chapter cannot protect your running system from being compromised. After the encrypted volume is successfully mounted, everybody with appropriate permissions can access it. However, encrypted media are useful in case of loss or theft of your computer, or to prevent unauthorized individuals from reading your confidential data.

11.1 Setting Up an Encrypted File System with YaST

Use YaST to encrypt partitions or parts of your file system during installation or in an already installed system. However, encrypting a partition in an already-installed system is more difficult, because you need to resize and change existing partitions. In such cases, it may be more convenient to create an encrypted file of a defined size, in which to store other files or parts of your file system. To encrypt an entire partition, dedicate a partition for encryption in the partition layout. The standard partitioning proposal as suggested by YaST does not include an encrypted partition by default. Add it manually in the partitioning dialog.

11.1.1 Creating an Encrypted Partition during Installation

Warning
Warning: Password Input

Make sure to memorize the password for your encrypted partitions well. Without that password, you cannot access or restore the encrypted data.

The YaST expert dialog for partitioning offers the options needed for creating an encrypted partition. To create a new encrypted partition, proceed as follows:

  1. Run the YaST Expert Partitioner with System › Partitioner.

  2. Select a hard disk, click Add, and select a primary or an extended partition.

  3. Select the partition size or the region to use on the disk.

  4. Select the file system, and mount point of this partition.

  5. Activate the Encrypt device check box.

    Note
    Note: Additional Software Required

    After checking Encrypt device, a pop-up window asking you to install additional software may appear. Confirm the installation of all required packages to ensure that the encrypted partition works properly.

  6. If the encrypted file system needs to be mounted only when necessary, enable Do not mount partition in the Fstab Options. Otherwise, enable Mount partition and enter the mount point.

  7. Click Next and enter a password which is used to encrypt this partition. This password is not displayed. To prevent typing errors, you need to enter the password twice.

  8. Complete the process by clicking Finish. The newly-encrypted partition is now created.

During the boot process, the operating system asks for the password before mounting any encrypted partition which is set to be auto-mounted in /etc/fstab. Such a partition is then available to all users when it has been mounted.

To skip mounting the encrypted partition during start-up, press Enter when prompted for the password. Then decline the offer to enter the password again. In this case, the encrypted file system is not mounted and the operating system continues booting, blocking access to your data.

To mount an encrypted partition which is not mounted during the boot process, open a file manager and click the partition entry in the pane listing common places on your file system. You will be prompted for a password and the partition will be mounted.

When you are installing your system on a machine where partitions already exist, you can also decide to encrypt an existing partition during installation. In this case follow the description in Section 11.1.2, “Creating an Encrypted Partition on a Running System” and be aware that this action destroys all data on the existing partition.

11.1.2 Creating an Encrypted Partition on a Running System

Warning
Warning: Activating Encryption on a Running System

It is also possible to create encrypted partitions on a running system. However, encrypting an existing partition destroys all data on it, and requires re-sizing and restructuring of existing partitions.

On a running system, select System › Partitioner in the YaST control center. Click Yes to proceed. In the Expert Partitioner, select the partition to encrypt and click Edit. The rest of the procedure is the same as described in Section 11.1.1, “Creating an Encrypted Partition during Installation”.

11.1.3 Creating an Encrypted Virtual Disk

Instead of encrypting an entire disk or partition, you can use YaST to set up a file-based encrypted virtual disk. It will appear as a regular file in the file system, but can be mounted and used like a regular folder. Unlike encrypted partitions, encrypted virtual disks can be created without re-partitioning the hard disk.

To set up an encrypted virtual disk, you need to create an empty file first (this file is called loop file). In the terminal, switch to the desired directory and run the touch FILE command (where FILE is the desired name, for example: secret). It is also recommended to create an empty directory that will act as a mount point for the encrypted virtual disk. To do this, use the mkdir DIR command (replace DIR with the actual path and directory name, for example: ~/my_docs).
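Assuming the example names used above, the preparation could look like this:

touch ~/secret
mkdir ~/my_docs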

To set up an encrypted virtual disk, launch YaST, switch to the System section, and start Partitioner. Switch to the Crypt Files section and press Add Crypt File. Enter the path to the created loop file into the Path Name of Loop File field. Enable the Create Loop File option, specify the desired size, and press Next. In the Mount Point field, enter the path to the directory that serves as a mount point (in this example, it is ~/my_docs). Make sure that the Encrypt Device option is enabled and press Next. Provide the desired password and press Finish.

11.1.4 Encrypting the Content of Removable Media

YaST treats removable media (like external hard disks or flash disks) the same as any other storage device. Virtual disks or partitions on external media can be encrypted as described above. However, you should disable mounting at boot time, because removable media is usually connected only when the system is up and running.

If you encrypted your removable device with YaST, the GNOME desktop automatically recognizes the encrypted partition and prompts for the password when the device is detected. If you plug in a FAT-formatted removable device when running GNOME, the desktop user entering the password automatically becomes the owner of the device. For devices with a file system other than FAT, change the ownership explicitly for users other than root to give them read-write access to the device.

If you have created a virtual disk as described in Section 11.1.3, “Creating an Encrypted Virtual Disk” but with the loop file on a removable disk, then you need to mount the file manually as follows:

sudo cryptsetup luksOpen FILE NAME
sudo mount /dev/mapper/NAME DIR

In the commands above, FILE refers to the path to the loop file, NAME is a user-defined name, and DIR is the path to the mount point. For example:

sudo cryptsetup luksOpen /run/media/tux/usbstick/secret my_secret
sudo mount /dev/mapper/my_secret /home/tux/my_docs
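To detach the virtual disk again, reverse the two steps, using the mount point and mapping name from the example above:

sudo umount /home/tux/my_docs
sudo cryptsetup luksClose my_secret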

11.2 Using Encrypted Home Directories

To protect data in home directories from unauthorized access, use the YaST user management module to encrypt home directories. You can create encrypted home directories for new or existing users. To encrypt or decrypt home directories of already existing users, you need to know their login password. See Section 13.3.3, “Managing Encrypted Home Directories” for instructions.

Encrypted home partitions are created within a virtual disk as described in Section 11.1.3, “Creating an Encrypted Virtual Disk”. Two files are created under /home for each encrypted home directory:

LOGIN.img

The image holding the directory.

LOGIN.key

The image key, protected with the user's login password.

On login, the home directory automatically gets decrypted. Internally, it works through the PAM module called pam_mount. If you need to add an additional login method that provides encrypted home directories, you need to add this module to the respective configuration file in /etc/pam.d/. For more information, see Chapter 2, Authentication with PAM and the man page of pam_mount.
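As a minimal sketch (the exact stock configuration on your system may differ), the pam_mount entries in such a file typically look like this:

auth     optional  pam_mount.so
session  optional  pam_mount.so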

Warning
Warning: Security Restrictions

Encrypting a user's home directory does not provide strong security from other users. If strong security is required, the system should not be shared physically.

To enhance security, also encrypt the swap partition and the /tmp and /var/tmp directories, because these may contain temporary images of critical data. You can encrypt swap, /tmp, and /var/tmp with the YaST partitioner as described in Section 11.1.1, “Creating an Encrypted Partition during Installation” or Section 11.1.3, “Creating an Encrypted Virtual Disk”.

11.3 Encrypting Files with GPG

The GPG encryption software can be used to encrypt individual files and documents.

To encrypt a file with GPG, you need to generate a key pair first. To do this, run the gpg --gen-key command and follow the on-screen instructions. When generating the key pair, GPG creates a user ID (UID) to identify the key based on your real name, comments, and email address. You need this UID (or just a part of it, like your first name or email address) to specify the key you want to use to encrypt a file. To find the UID of an existing key, use the gpg --list-keys command. To encrypt a file, use the following command:

gpg -e -r UID FILE

Replace UID with part of the UID (for example, your first name) and FILE with the file you want to encrypt. For example:

gpg -e -r Tux secret.txt

This command creates an encrypted version of the specified file recognizable by the .gpg file extension (in this example, it is secret.txt.gpg).

To decrypt an encrypted file, use the following command:

gpg -d -o DECRYPTED_FILE ENCRYPTED_FILE

Replace DECRYPTED_FILE with the desired name for the decrypted file and ENCRYPTED_FILE with the encrypted file you want to decrypt.

Keep in mind that the encrypted file can only be decrypted with a key that matches the one used for encryption. If you want to share an encrypted file with another person, you have to encrypt it with that person's public key.
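For example, to encrypt a file for a recipient whose public key you have received (the key file name and address below are hypothetical):

gpg --import wilber.asc
gpg -e -r wilber@example.com secret.txt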

12 Certificate Store

  • Filename: security_certificatestore.xml
  • ID: cha.certstore
Abstract

Certificates play an important role in the authentication of companies and individuals. Usually certificates are administered by the application itself. In some cases, it makes sense to share certificates between applications. The certificate store is a common ground for Firefox, Evolution, and NetworkManager. This chapter explains some details.

The certificate store is a common database for Firefox, Evolution, and NetworkManager at the moment. Other applications that use certificates are not covered but may be in the future. If you have such an application, you can continue to use its private, separate configuration.

12.1 Activating Certificate Store

The configuration is mostly done in the background. To activate it, proceed as follows:

  1. Decide if you want to activate the certificate store globally (for every user on your system) or specifically to a certain user:

    • For every user.  Use the file /etc/profile.local

    • For a specific user.  Use the file ~/.bashrc

  2. Open the file from the previous step and insert the following line:

    export NSS_USE_SHARED_DB=1

    Save the file.

  3. Log out of your desktop and log in again.

All the certificates are stored under $HOME/.local/var/pki/nssdb/.
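To inspect the shared database, you can use the certutil tool (shipped in the mozilla-nss-tools package); the sql: prefix selects the shared database format:

certutil -L -d sql:$HOME/.local/var/pki/nssdb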

12.2 Importing Certificates

To import a certificate into the certificate store, do the following:

  1. Start Firefox.

  2. Open the dialog from Edit › Preferences. Change to Advanced › Encryption and click View Certificates.

  3. Import your certificate depending on its type: use Servers to import a server certificate, People to import a certificate identifying another person, and Your Certificates to import a certificate identifying yourself.

13 Intrusion Detection with AIDE

  • Filename: security_aide.xml
  • ID: cha.aide
Abstract

Securing your systems is a mandatory task for any mission-critical system administrator. Because it is impossible to always guarantee that the system is not compromised, it is very important to do extra checks regularly (for example with cron) to ensure that the system is still under your control. This is where AIDE, the Advanced Intrusion Detection Environment, comes into play.

13.1 Why Use AIDE?

An easy check that can often reveal unwanted changes can be done by means of RPM. The package manager has a built-in verify function that checks all the managed files in the system for changes. To verify all files, run the command rpm -Va. However, this command also displays changes in configuration files, so you need to do some filtering to detect important changes.
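For example, one possible filter hides configuration files, which are marked with a c in the second column of the rpm -V output:

rpm -Va | grep -v ' c /'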

A further problem with the RPM method is that an intelligent attacker may modify rpm itself, for example by means of a rootkit, to hide any changes, mask the intrusion, and retain root privileges. To solve this, implement a secondary check that can also be run completely independently of the installed system.

13.2 Setting Up an AIDE Database

Important
Important: Initialize AIDE Database After Installation

Before you install your system, verify the checksum of your medium (see Section 34.2.1, “Checking Media”) to make sure you do not use a compromised source. After you have installed the system, initialize the AIDE database. To make sure that all went well during and after the installation, do an installation directly on the console, without any network attached to the computer. Do not leave the computer unattended or connected to any network before AIDE creates its database.

AIDE is not installed by default on SUSE Linux Enterprise Desktop. To install it, either use Computer › Install Software, or enter zypper install aide on the command line as root.

To tell AIDE which attributes of which files should be checked, use the /etc/aide.conf configuration file. It must be modified to become the actual configuration. The first section handles general parameters like the location of the AIDE database file. More relevant for local configurations are the Custom Rules and the Directories and Files sections. A typical rule looks like the following:

Binlib     = p+i+n+u+g+s+b+m+c+md5+sha1

After defining the variable Binlib, the respective checks are used in the files section. Important options include the following:

Table 13.1: Important AIDE Checks

Option   Description

p        Check the file permissions of the selected files or directories.
i        Check the inode number. Every file name has a unique inode number that should not change.
n        Check the number of links pointing to the relevant file.
u        Check if the owner of the file has changed.
g        Check if the group of the file has changed.
s        Check if the file size has changed.
b        Check if the block count used by the file has changed.
m        Check if the modification time of the file has changed.
c        Check if the ctime (inode change time) of the file has changed.
md5      Check if the md5 checksum of the file has changed.
sha1     Check if the sha1 (160-bit) checksum of the file has changed.

The following configuration checks all files in /sbin with the options defined in Binlib but omits the /sbin/conf.d/ directory:

/sbin  Binlib
!/sbin/conf.d
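Putting these pieces together, a minimal /etc/aide.conf could look like the following sketch. The database locations shown here match the paths used later in this chapter; adjust them if your installation differs:

# Database read by aide --check and database written by aide -i
database=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new

# Custom rule combining the checks from Table 13.1
Binlib = p+i+n+u+g+s+b+m+c+md5+sha1

# Check all of /sbin with Binlib, but omit /sbin/conf.d/
/sbin  Binlib
!/sbin/conf.d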

To create the AIDE database, proceed as follows:

  1. Open /etc/aide.conf.

  2. Define which files should be checked with which checks. For a complete list of available checks, see /usr/share/doc/packages/aide/manual.html. The definition of the file selection requires some knowledge of regular expressions. Save your modifications.

  3. To check whether the configuration file is valid, run:

    aide --config-check

    Any output of this command is a hint that the configuration is not valid. For example, if you get the following output:

    aide --config-check
    35:syntax error:!
    35:Error while reading configuration:!
    Configuration error

    The error is to be expected in line 36 of /etc/aide.conf. Note that the error message contains the last successfully read line of the configuration file.

  4. Initialize the AIDE database. Run the command:

    aide -i
  5. Copy the generated database to a safe location like a CD-R or DVD-R, a remote server, or a flash disk for later use.

    Important
    Important:

    This step is essential as it avoids compromising your database. It is recommended to use a medium which can be written only once to prevent the database being modified. Never leave the database on the computer which you want to monitor.

13.3 Local AIDE Checks

To perform a file system check, proceed as follows:

  1. Rename the database:

    mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
  2. After any configuration change, you always need to re-initialize the AIDE database and subsequently move the newly generated database. It is also a good idea to make a backup of this database. See Section 13.2, “Setting Up an AIDE Database” for more information.

  3. Perform the check with the following command:

    aide --check

If the output is empty, everything is fine. If AIDE found changes, it displays a summary of changes, for example:

aide --check
AIDE found differences between database and filesystem!!

Summary:
  Total number of files:        1992
  Added files:                  0
  Removed files:                0
  Changed files:                1

To learn about the actual changes, increase the verbose level of the check with the parameter -V. For the previous example, this could look like the following:

aide --check -V
AIDE found differences between database and filesystem!!
Start timestamp: 2009-02-18 15:14:10

Summary:
  Total number of files:        1992
  Added files:                  0
  Removed files:                0
  Changed files:                1


---------------------------------------------------
Changed files:
---------------------------------------------------

changed: /etc/passwd

--------------------------------------------------
Detailed information about changes:
---------------------------------------------------


File: /etc/passwd
  Mtime    : 2009-02-18 15:11:02              , 2009-02-18 15:11:47
  Ctime    : 2009-02-18 15:11:02              , 2009-02-18 15:11:47

In this example, the file /etc/passwd was touched to demonstrate the effect.
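To run such checks regularly, as suggested at the beginning of this chapter, you can use cron. A minimal sketch of an entry in /etc/cron.d (assuming local mail delivery is configured) could look like this:

# Run a daily AIDE check at 3:00 and mail the report to root
0 3 * * * root /usr/bin/aide --check | mail -s "AIDE report" root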

13.4 System Independent Checking

For maximum safety, it is advisable to also run the AIDE binary from a trusted source. This eliminates the risk that an attacker has also modified the aide binary to hide its traces.

To accomplish this task, AIDE must be run from a rescue system that is independent of the installed system. With SUSE Linux Enterprise Desktop it is relatively easy to extend the rescue system with arbitrary programs, and thus add the needed functionality.

Before you can start using the rescue system, you need to provide two packages to the system. These are included with the same syntax as you would add a driver update disk to the system. For a detailed description about the possibilities of linuxrc that are used for this purpose, see http://en.opensuse.org/SDB:Linuxrc. In the following, one possible way to accomplish this task is discussed.

Procedure 13.1: Starting a Rescue System with AIDE
  1. Provide an FTP server as a second machine.

  2. Copy the packages aide and mhash to the FTP server directory, in our case /srv/ftp/. Replace the placeholders ARCH and VERSION with the corresponding values:

    cp DVD1/suse/ARCH/aide-VERSION.ARCH.rpm /srv/ftp
    cp DVD1/suse/ARCH/mhash-VERSION.ARCH.rpm /srv/ftp
  3. Create an info file /srv/ftp/info.txt that provides the needed boot parameters for the rescue system:

    dud:ftp://ftp.example.com/aide-VERSION.ARCH.rpm
    dud:ftp://ftp.example.com/mhash-VERSION.ARCH.rpm

    Replace your FTP domain name, VERSION and ARCH with the values used on your system.

  4. Restart the server that needs to go through an AIDE check with the Rescue system from your DVD. Add the following string to the boot parameters:

    info=ftp://ftp.example.com/info.txt

    This parameter tells linuxrc to also read in all information from the info.txt file.

After the rescue system has booted, the AIDE program is ready for use.

13.5 For More Information

Information about AIDE is available at the following places:

man aide , man aide.conf

The man pages of the aide command and its configuration file

/usr/share/doc/packages/aide/manual.html

The full AIDE manual shipped with the aide package

Part III Network Security

14 SSH: Secure Network Operations

In networked environments, it is often necessary to access hosts from a remote location. If a user sends login and password strings for authentication purposes as plain text, they could be intercepted and misused to gain access to that user account. This would open all the user's files to an attacker and the illegal account could be used to obtain administrator or root access, or to penetrate other systems. In the past, remote connections were established with telnet, rsh or rlogin, which offered no guards against eavesdropping in the form of encryption or other security mechanisms. There are other unprotected communication channels, like the traditional FTP protocol and some remote copying programs like rcp.

15 Masquerading and Firewalls

Whenever Linux is used in a network environment, you can use the kernel functions that allow the manipulation of network packets to maintain a separation between internal and external network areas. The Linux netfilter framework provides the means to establish an effective firewall that keeps different networks apart.

16 Configuring a VPN Server

Today, Internet connections are cheap and available almost everywhere. However, not all connections are secure. Using a Virtual Private Network (VPN), you can create a secure network within an insecure network such as the Internet or Wi-Fi. It can be implemented in different ways and serves several purposes. In this chapter, we focus on the OpenVPN implementation to link branch offices via secure wide area networks (WANs).

17 Managing X.509 Certification

An increasing number of authentication mechanisms are based on cryptographic procedures. Digital certificates that assign cryptographic keys to their owners play an important role in this context. These certificates are used for communication and can also be found, for example, on company ID cards. The generation and administration of certificates is mostly handled by official institutions that offer this as a commercial service. In some cases, however, it may make sense to carry out these tasks yourself. For example, if a company does not want to pass personal data to third parties.

YaST provides two modules for certification, which offer basic management functions for digital X.509 certificates. The following sections explain the basics of digital certification and how to use YaST to create and administer certificates of this type.

14 SSH: Secure Network Operations

  • Filename: security_ssh.xml
  • ID: cha.ssh
Abstract

In networked environments, it is often necessary to access hosts from a remote location. If a user sends login and password strings for authentication purposes as plain text, they could be intercepted and misused to gain access to that user account. This would open all the user's files to an attacker and the illegal account could be used to obtain administrator or root access, or to penetrate other systems. In the past, remote connections were established with telnet, rsh or rlogin, which offered no guards against eavesdropping in the form of encryption or other security mechanisms. There are other unprotected communication channels, like the traditional FTP protocol and some remote copying programs like rcp.

The SSH suite provides the necessary protection by encrypting the authentication strings (usually a login name and a password) and all the other data exchanged between the hosts. With SSH, the data flow could still be recorded by a third party, but the contents are encrypted and cannot be reverted to plain text unless the encryption key is known. So SSH enables secure communication over insecure networks, such as the Internet. The SSH implementation coming with SUSE Linux Enterprise Desktop is OpenSSH.

SUSE Linux Enterprise Desktop installs the OpenSSH package by default providing the commands ssh, scp, and sftp. In the default configuration, remote access of a SUSE Linux Enterprise Desktop system is only possible with the OpenSSH utilities, and only if the sshd is running and the firewall permits access.

SSH on SUSE Linux Enterprise Desktop uses cryptographic hardware acceleration if available. As a result, the transfer of large quantities of data through an SSH connection is considerably faster than without cryptographic hardware. As an additional benefit, the CPU will see a significant reduction in load.

14.1 ssh—Secure Shell

With ssh it is possible to log in to remote systems and to work interactively. To log in to the host sun as user tux, enter one of the following commands:

ssh tux@sun
ssh -l tux sun

If the user name is the same on both machines, you can omit it. Using ssh sun is sufficient. The remote host prompts for the remote user's password. After a successful authentication, you can work on the remote command line or use interactive applications, such as YaST in text mode.

Furthermore, ssh offers the possibility to run non-interactive commands on remote systems using ssh HOST COMMAND. COMMAND needs to be properly quoted. Multiple commands can be concatenated as on a local shell.

ssh root@sun "dmesg -T | tail -n 25"
ssh root@sun "cat /etc/issue && uptime"

14.1.1 Starting X Applications on a Remote Host

SSH also simplifies the use of remote X applications. If you run ssh with the -X option, the DISPLAY variable is automatically set on the remote machine and all X output is exported to the local machine over the existing SSH connection. At the same time, X applications started remotely cannot be intercepted by unauthorized individuals.
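For example, the following command starts an xterm on sun and displays its window on the local machine:

ssh -X tux@sun xterm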

14.1.2 Agent Forwarding

By adding the -A option, the ssh-agent authentication mechanism is carried over to the next machine. This way, you can work from different machines without having to enter a password, but only if you have distributed your public key to the destination hosts and properly saved it there. Refer to Section 14.5.2, “Copying an SSH Key” for details.

This mechanism is deactivated in the default settings, but can be permanently activated at any time in the systemwide configuration file /etc/ssh/sshd_config by setting AllowAgentForwarding yes.

14.2 scp—Secure Copy

scp copies files to or from a remote machine. If the user name on jupiter is different than the user name on sun, specify the latter using the USER_NAME@host format. If the file should be copied into a directory other than the remote user's home directory, specify it as sun:DIRECTORY. The following examples show how to copy a file from a local to a remote machine and vice versa.

# local -> remote
scp ~/MyLetter.tex tux@sun:/tmp
# remote -> local
scp tux@sun:/tmp/MyLetter.tex ~
Tip
Tip: The -l Option

With the ssh command, the option -l can be used to specify a remote user (as an alternative to the USER_NAME@host format). With scp the option -l is used to limit the bandwidth consumed by scp.

After the correct password is entered, scp starts the data transfer. It displays a progress bar and the time remaining for each file that is copied. Suppress all output with the -q option.

scp also provides a recursive copying feature for entire directories. The command

scp -r src/ sun:backup/

copies the entire contents of the directory src including all subdirectories to the ~/backup directory on the host sun. If this subdirectory does not exist, it is created automatically.

The -p option tells scp to leave the time stamp of files unchanged. -C compresses the data transfer. This minimizes the data volume to transfer, but creates a heavier burden on the processors of both machines.

14.3 sftp—Secure File Transfer

14.3.1 Using sftp

If you want to copy several files from or to different locations, sftp is a convenient alternative to scp. It opens a shell with a set of commands similar to a regular FTP shell. Type help at the sftp prompt to get a list of available commands. More details are available from the sftp man page.

sftp sun
Enter passphrase for key '/home/tux/.ssh/id_rsa':
Connected to sun.
sftp> help
Available commands:
bye                                Quit sftp
cd path                            Change remote directory to 'path'
[...]

14.3.2 Setting Permissions for File Uploads

As with a regular FTP server, a user can not only download, but also upload files to a remote machine running an SFTP server, by using the put command. By default, the files will be uploaded to the remote host with the same permissions as on the local host. There are two options to automatically alter these permissions:

Setting a umask

A umask works as a filter against the permissions of the original file on the local host. It can only withdraw permissions:

Table 14.1:

permissions original   umask   permissions uploaded

0666                   0002    0664
0600                   0002    0600
0775                   0025    0750

To apply a umask on an SFTP server, edit the file /etc/ssh/sshd_config. Search for the line beginning with Subsystem sftp and add the -u parameter with the desired setting, for example:

Subsystem sftp /usr/lib/ssh/sftp-server -u 0002

Explicitly Setting the Permissions

Explicitly setting the permissions sets the same permissions for all files uploaded via SFTP. Specify a three-digit pattern such as 600, 644, or 755 with -m. When both -m and -u are specified, -u is ignored.

To apply explicit permissions for uploaded files on an SFTP server, edit the file /etc/ssh/sshd_config. Search for the line beginning with Subsystem sftp and add the -m parameter with the desired setting, for example:

Subsystem sftp /usr/lib/ssh/sftp-server -m 600

14.4 The SSH Daemon (sshd)

To work with the SSH client programs ssh and scp, a server (the SSH daemon) must be running in the background, listening for connections on TCP/IP port 22. The daemon generates three key pairs when starting for the first time. Each key pair consists of a private and a public key. Therefore, this procedure is called public key-based. To guarantee the security of the communication via SSH, access to the private key files must be restricted to the system administrator. The file permissions are set accordingly by the default installation. The private keys are only required locally by the SSH daemon and must not be given to anyone else. The public key components (recognizable by the name extension .pub) are sent to the client requesting the connection. They are readable for all users.

A connection is initiated by the SSH client. The waiting SSH daemon and the requesting SSH client exchange identification data to compare the protocol and software versions, and to prevent connections through the wrong port. Because a child process of the original SSH daemon replies to the request, several SSH connections can be made simultaneously.

For the communication between SSH server and SSH client, OpenSSH supports versions 1 and 2 of the SSH protocol. Version 2 of the SSH protocol is used by default. Override this to use version 1 of the protocol with the -1 option.

When using version 1 of SSH, the server sends its public host key and a server key, which is regenerated by the SSH daemon every hour. Both allow the SSH client to encrypt a freely chosen session key, which is sent to the SSH server. The SSH client also tells the server which encryption method (cipher) to use. Version 2 of the SSH protocol does not require a server key. Both sides use an algorithm according to Diffie-Hellman to exchange their keys.

The private host and server keys are absolutely required to decrypt the session key and cannot be derived from the public parts. Only the contacted SSH daemon can decrypt the session key using its private keys. This initial connection phase can be watched closely by turning on verbose debugging using the -v option of the SSH client.

Tip
Tip: Viewing the SSH Daemon Log File

To watch the log entries from the sshd use the following command:

tux > sudo journalctl -u sshd

14.4.1 Maintaining SSH Keys

It is recommended to back up the private and public keys stored in /etc/ssh/ in a secure, external location. In this way, key modifications can be detected or the old ones can be used again after having installed a new system.

Tip
Tip: Existing SSH Host Keys

If you install SUSE Linux Enterprise Desktop on a machine with existing Linux installations, the installation routine automatically imports the SSH host key with the most recent access time from an existing installation.

When establishing a secure connection with a remote host for the first time, the client stores all public host keys in ~/.ssh/known_hosts. This prevents any man-in-the-middle attacks—attempts by foreign SSH servers to use spoofed names and IP addresses. Such attacks are detected either by a host key that is not included in ~/.ssh/known_hosts, or by the server's inability to decrypt the session key in the absence of an appropriate private counterpart.

If the public keys of a host have changed (which needs to be verified before connecting to such a server), the offending keys can be removed with ssh-keygen -r HOSTNAME.

14.4.2 Rotating Host Keys

As of version 6.8, OpenSSH comes with a protocol extension that supports host key rotation. It makes sense to replace keys if you are still using weak keys such as 1024-bit RSA keys. It is strongly recommended to replace such a key with a 2048-bit (or stronger) key. The client will then use the best host key.

Tip
Tip: Restarting sshd

After installing new host keys on the server, restart sshd.

This protocol extension can inform a client of all the new host keys on the server, if the user initiates a connection with ssh. Then, the software on the client updates ~/.ssh/known_hosts, and the user is not required to accept new keys of previously known and trusted hosts manually. The local known_hosts file will contain all the host keys of the remote hosts, in addition to the one that authenticated the host during this session.

Once the administrator of the server knows that all the clients have fetched the new keys, they can remove the old keys. The protocol extension ensures that the obsolete keys will be removed from the client's configuration, too. The key removal occurs while initiating an ssh session.

For more information, see:

14.5 SSH Authentication Mechanisms

In its simplest form, authentication is done by entering the user's password just as if logging in locally. However, having to memorize passwords of several users on remote machines is inefficient. What is more, these passwords may change. On the other hand—when granting root access—an administrator needs to be able to quickly revoke such a permission without having to change the root password.

To accomplish a login that does not require to enter the remote user's password, SSH uses another key pair, which needs to be generated by the user. It consists of a public (id_rsa.pub or id_dsa.pub) and a private key (id_rsa or id_dsa).

To be able to log in without having to specify the remote user's password, the public key of the SSH user must be in ~/.ssh/authorized_keys. This approach also ensures that the remote user has full control: adding the key requires the remote user's password, and removing the key revokes the permission to log in remotely.

For maximum security, such a key should be protected by a passphrase, which needs to be entered every time you use ssh, scp, or sftp. Contrary to simple authentication, this passphrase is independent of the remote user and therefore always the same.

As an alternative to the key-based authentication described above, SSH also offers host-based authentication. With host-based authentication, users on a trusted host can log in to another host on which this feature is enabled, using the same user name. SUSE Linux Enterprise Desktop is set up for key-based authentication; setting up host-based authentication on SUSE Linux Enterprise Desktop is beyond the scope of this manual.

Note
Note: File Permissions for Host-Based Authentication

If host-based authentication is to be used, the file /usr/lib/ssh/ssh-keysign (32-bit systems) or /usr/lib64/ssh/ssh-keysign (64-bit systems) should have the setuid bit set, which is not the default setting in SUSE Linux Enterprise Desktop. In such a case, set the file permissions manually. You should use /etc/permissions.local for this purpose, to make sure that the setuid bit is preserved after security updates of openssh.

14.5.1 Generating an SSH Key

  1. To generate a key with default parameters (RSA, 2048 bits), enter the command ssh-keygen.

  2. Accept the default location to store the key (~/.ssh/id_rsa) by pressing Enter (strongly recommended) or enter an alternative location.

  3. Enter a passphrase consisting of 10 to 30 characters. The same rules as for creating safe passwords apply. It is strongly advised not to leave the passphrase empty.

You should make absolutely sure that the private key is not accessible by anyone other than yourself (always set its permissions to 0600). The private key must never fall into the hands of another person.

To change the passphrase of an existing key pair, use the command ssh-keygen -p.
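For example, to change the passphrase of the default RSA key without being prompted for the file name:

ssh-keygen -p -f ~/.ssh/id_rsa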

14.5.2 Copying an SSH Key

To copy a public SSH key to ~/.ssh/authorized_keys of a user on a remote machine, use the command ssh-copy-id. To copy your personal key stored under ~/.ssh/id_rsa.pub you may use the short form. To copy DSA keys or keys of other users, you need to specify the path:

# ~/.ssh/id_rsa.pub
ssh-copy-id -i tux@sun

# ~/.ssh/id_dsa.pub
ssh-copy-id -i ~/.ssh/id_dsa.pub tux@sun

# ~notme/.ssh/id_rsa.pub
ssh-copy-id -i ~notme/.ssh/id_rsa.pub tux@sun

To successfully copy the key, you need to enter the remote user's password. To remove an existing key, manually edit ~/.ssh/authorized_keys.

14.5.3 Using the ssh-agent

When doing lots of secure shell operations, it is cumbersome to type the SSH passphrase for each such operation. Therefore, the SSH package provides another tool, ssh-agent, which retains the private keys for the duration of an X or terminal session. All other windows or programs are started as clients of the ssh-agent. By starting the agent, a set of environment variables is set, which is used by ssh, scp, or sftp to locate the agent for automatic login. See the ssh-agent man page for details.

After the ssh-agent is started, you need to add your keys by using ssh-add. It will prompt for the passphrase. After the passphrase has been provided once, you can use the secure shell commands within the running session without having to authenticate again.

14.5.3.1 Using ssh-agent in an X Session

On SUSE Linux Enterprise Desktop, the ssh-agent is automatically started by the GNOME display manager. To also invoke ssh-add to add your keys to the agent at the beginning of an X session, do the following:

  1. Log in as the desired user and check whether the file ~/.xinitrc exists.

  2. If it does not exist, use an existing template or copy it from /etc/skel:

    if [ -f ~/.xinitrc.template ]; then mv ~/.xinitrc.template ~/.xinitrc; \
    else cp /etc/skel/.xinitrc.template ~/.xinitrc; fi
  3. If you have copied the template, search for the following lines and uncomment them. If ~/.xinitrc already existed, add the following lines (without comment signs).

    # if test -S "$SSH_AUTH_SOCK" -a -x "$SSH_ASKPASS"; then
    #       ssh-add < /dev/null
    # fi
  4. When starting a new X session, you will be prompted for your SSH passphrase.

14.5.3.2 Using ssh-agent in a Terminal Session

In a terminal session you need to manually start the ssh-agent and then call ssh-add afterward. There are two ways to start the agent. The first example given below starts a new Bash shell on top of your existing shell. The second example starts the agent in the existing shell and modifies the environment as needed.

ssh-agent /bin/bash
eval $(ssh-agent)

After the agent has been started, run ssh-add to provide the agent with your keys.
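For example (use ssh-add -l to verify which keys the agent currently holds):

ssh-add ~/.ssh/id_rsa
ssh-add -l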

14.6 Port Forwarding

ssh can also be used to redirect TCP/IP connections. This feature, also called SSH tunneling, redirects TCP connections destined for a certain port to another machine via an encrypted channel.

With the following command, any connection directed to jupiter port 25 (SMTP) is redirected to the SMTP port on sun. This is especially useful for those using SMTP servers without SMTP-AUTH or POP-before-SMTP features. From any arbitrary location connected to a network, e-mail can be transferred to the home mail server for delivery.

ssh -L 25:sun:25 jupiter

Similarly, all POP3 requests (port 110) on jupiter can be forwarded to the POP3 port of sun with this command:

ssh -L 110:sun:110 jupiter

Both commands must be executed as root, because the connection is made to privileged local ports. E-mail is sent and retrieved by normal users in an existing SSH connection. The SMTP and POP3 host must be set to localhost for this to work. Additional information can be found in the manual pages for each of the programs described above and in the OpenSSH package documentation under /usr/share/doc/packages/openssh.

14.7 For More Information

http://www.openssh.com

The home page of OpenSSH

http://en.wikibooks.org/wiki/OpenSSH

The OpenSSH Wikibook

man sshd

The man page of the OpenSSH daemon

man ssh_config

The man page of the OpenSSH SSH client configuration files

man scp , man sftp , man slogin , man ssh , man ssh-add , man ssh-agent , man ssh-copy-id , man ssh-keyconvert , man ssh-keygen , man ssh-keyscan

Man pages of several binary files to securely copy files (scp, sftp), to log in (slogin, ssh), and to manage keys.

/usr/share/doc/packages/openssh/README.SUSE , /usr/share/doc/packages/openssh/README.FIPS

SUSE package specific documentation; changes in defaults with respect to upstream, notes on FIPS mode etc.

15 Masquerading and Firewalls

  • Filename: security_firewall.xml
  • ID: cha.security.firewall

Whenever Linux is used in a network environment, you can use the kernel functions that allow the manipulation of network packets to maintain a separation between internal and external network areas. The Linux netfilter framework provides the means to establish an effective firewall that keeps different networks apart. Using iptables—a generic table structure for the definition of rule sets—precisely controls the packets allowed to pass a network interface. Such a packet filter can be set up using SuSEFirewall2 and the corresponding YaST module.

15.1 Packet Filtering with iptables

The components netfilter and iptables are responsible for the filtering and manipulation of network packets and for network address translation (NAT). The filtering criteria and any actions associated with them are stored in chains, which must be matched one after another by individual network packets as they arrive. The chains to match are stored in tables. The iptables command allows you to alter these tables and rule sets.

The Linux kernel maintains three tables, each for a particular category of functions of the packet filter:

filter

This table holds the bulk of the filter rules, because it implements the packet filtering mechanism in the stricter sense, which determines whether packets are let through (ACCEPT) or discarded (DROP), for example.

nat

This table defines any changes to the source and target addresses of packets. Using these functions also allows you to implement masquerading, which is a special case of NAT used to link a private network with the Internet.

mangle

The rules held in this table make it possible to manipulate values stored in IP headers (such as the type of service).

These tables contain several predefined chains to match packets:

PREROUTING

This chain is applied to incoming packets.

INPUT

This chain is applied to packets destined for the system's internal processes.

FORWARD

This chain is applied to packets that are only routed through the system.

OUTPUT

This chain is applied to packets originating from the system itself.

POSTROUTING

This chain is applied to all outgoing packets.

Figure 15.1, “iptables: A Packet's Possible Paths” illustrates the paths along which a network packet may travel on a given system. For the sake of simplicity, the figure lists tables as parts of chains, but in reality these chains are held within the tables themselves.

In the simplest case, an incoming packet destined for the system itself arrives at the eth0 interface. The packet is first referred to the PREROUTING chain of the mangle table, then to the PREROUTING chain of the nat table. The following step, concerning the routing of the packet, determines that the actual target of the packet is a process of the system itself. After passing the INPUT chains of the mangle and the filter table, the packet finally reaches its target, provided the rules of the filter table allow this.

Figure 15.1: iptables: A Packet's Possible Paths
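
To illustrate how tables and chains interact, the following minimal sketch (run as root) lists the rules of the filter table and appends a rule to its INPUT chain that accepts incoming SSH connections; the interface name eth0 is an assumption. Note that rules added manually this way are lost when SuSEFirewall2 regenerates its rule set.

iptables -t filter -L -v
iptables -t filter -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT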

15.2 Masquerading Basics

Masquerading is the Linux-specific form of NAT (network address translation) and can be used to connect a small LAN with the Internet. LAN hosts use IP addresses from the private range (see Section 17.1.2, “Netmasks and Routing”) and on the Internet official IP addresses are used. To be able to connect to the Internet, a LAN host's private address is translated to an official one. This is done on the router, which acts as the gateway between the LAN and the Internet. The underlying principle is a simple one: The router has more than one network interface, typically a network card and a separate interface connecting with the Internet. While the latter links the router with the outside world, one or several others link it with the LAN hosts. With the hosts in the local network connected to the router's network card (such as eth0), they can send any packets not destined for the local network to their default gateway, the router.

Important
Important: Using the Correct Network Mask

When configuring your network, make sure both the broadcast address and the netmask are the same for all local hosts. Failing to do so prevents packets from being routed properly.

As mentioned, whenever one of the LAN hosts sends a packet destined for an Internet address, it goes to the default router. However, the router must be configured before it can forward such packets. For security reasons, this is not enabled in a default installation. To enable it, set the variable IP_FORWARD in the file /etc/sysconfig/sysctl to IP_FORWARD=yes.
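
The sysconfig setting takes effect at boot time. To check or enable packet forwarding in the running kernel without a reboot, the standard sysctl interface can be used, as in this sketch (run as root):

sysctl net.ipv4.ip_forward
sysctl -w net.ipv4.ip_forward=1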

The target host of the connection can see your router, but knows nothing about the host in your internal network where the packets originated. This is why the technique is called masquerading. Because of the address translation, the router is the first destination of any reply packets. The router must identify these incoming packets and translate their target addresses, so packets can be forwarded to the correct host in the local network.

With the routing of inbound traffic depending on the masquerading table, there is no way to open a connection to an internal host from the outside. For such a connection, there would be no entry in the table. In addition, any connection already established has a status entry assigned to it in the table, so the entry cannot be used by another connection.

As a consequence of all this, you might experience some problems with several application protocols, such as ICQ, cucme, IRC (DCC, CTCP), and FTP (in PORT mode). Web browsers, the standard FTP program, and many other programs use the PASV mode. This passive mode is much less problematic as far as packet filtering and masquerading are concerned.

15.3 Firewalling Basics

Firewall is probably the term most widely used to describe a mechanism that provides and manages a link between networks while also controlling the data flow between them. Strictly speaking, the mechanism described in this section is called a packet filter. A packet filter regulates the data flow according to certain criteria, such as protocols, ports, and IP addresses. This allows you to block packets that, according to their addresses, are not supposed to reach your network. To allow public access to your Web server, for example, explicitly open the corresponding port. However, a packet filter does not scan the contents of packets with legitimate addresses, such as those directed to your Web server. For example, if incoming packets were intended to compromise a CGI program on your Web server, the packet filter would still let them through.

A more effective but more complex mechanism is the combination of several types of systems, such as a packet filter interacting with an application gateway or proxy. In this case, the packet filter rejects any packets destined for disabled ports. Only packets directed to the application gateway are accepted. This gateway or proxy pretends to be the actual client of the server. In a sense, such a proxy could be considered a masquerading host on the protocol level used by the application. One example for such a proxy is Squid, an HTTP and FTP proxy server. To use Squid, the browser must be configured to communicate via the proxy. Any HTTP pages or FTP files requested are served from the proxy cache and objects not found in the cache are fetched from the Internet by the proxy.

The following section focuses on the packet filter that comes with SUSE Linux Enterprise Desktop. For further information about packet filtering and firewalling, read the Firewall HOWTO.

15.4 SuSEFirewall2

SuSEFirewall2 is a script that reads the variables set in /etc/sysconfig/SuSEfirewall2 to generate a set of iptables rules. It defines three security zones, although only the first and the second one are considered in the following sample configuration:

External Zone

Given that there is no way to control what is happening on the external network, the host needs to be protected from it. Usually, the external network is the Internet, but it could be another insecure network, such as a Wi-Fi network.

Internal Zone

This refers to the private network, usually the LAN. If the hosts on this network use IP addresses from the private range (see Section 17.1.2, “Netmasks and Routing”), enable network address translation (NAT), so hosts on the internal network can access the external one. All ports are open in the internal zone. The main benefit of putting interfaces into the internal zone (rather than stopping the firewall) is that the firewall still runs, so when you add new interfaces, they will be put into the external zone by default. That way an interface is not accidentally open by default.

Demilitarized Zone (DMZ)

While hosts located in this zone can be reached both from the external and the internal network, they cannot access the internal network themselves. This setup can be used to put an additional line of defense in front of the internal network, because the DMZ systems are isolated from the internal network.

Note
Note: No Zone Assigned Behavior

By default, all network interfaces are set to no zone assigned. This mode behaves like the External Zone.

Any kind of network traffic not explicitly allowed by the filtering rule set is suppressed by iptables. Therefore, each of the interfaces with incoming traffic must be placed into one of the three zones. For each of the zones, define the services or protocols allowed. The rule set is only applied to packets originating from remote hosts. Locally generated packets are not captured by the firewall.

The configuration can be performed with YaST (see Section 15.4.1, “Configuring the Firewall with YaST”). It can also be made manually in the file /etc/sysconfig/SuSEfirewall2, which is well commented. Additionally, several example scenarios are available in /usr/share/doc/packages/SuSEfirewall2/EXAMPLES.

15.4.1 Configuring the Firewall with YaST

15.4.1.1 Opening Ports

In case your network interfaces are located in a firewall zone where network traffic is blocked on most ports, services that manage their network traffic via a blocked port will not work. For example, SSH is a popular service that uses port 22. By default, this port is blocked on interfaces located in the external or demilitarized zone. To make SSH work, you need to open port 22 in the firewall configuration. This can be done with the YaST module Firewall.

Figure 15.2: Firewall Configuration: Allowed Services
Important
Important: Automatic Firewall Configuration

After the installation, YaST automatically starts a firewall on all configured interfaces. If a server is configured and activated on the system, YaST can modify the automatically generated firewall configuration with the options Open Ports on Selected Interface in Firewall or Open Ports on Firewall in the server configuration modules. Some server module dialogs include a Firewall Details button for activating additional services and ports. The YaST firewall configuration module can be used to activate, deactivate, or reconfigure the firewall.

Procedure 15.1: Manually Open Firewall Ports with YaST
  1. Open YaST › Security and Users › Firewall and switch to the Allowed Services tab.

  2. Select a zone at Allow Services for Selected Zone in which to open the port. It is not possible to open a port for several zones at once.

  3. Select a service from Service to Allow and choose Add to add it to the list of Allowed Services. The port this service uses will be unblocked.

    In case your service is not listed, you need to manually specify the port(s) to unblock. Choose Advanced to open a dialog where you can specify TCP, UDP, and RPC ports and IP protocols. Refer to the help section in this dialog for details.

  4. Choose Next to display a summary of your changes. Modify them by choosing Back or apply them by choosing Finish.

15.4.2 Configuring Manually

The following paragraphs provide step-by-step instructions for a successful configuration. Each configuration item is marked as to whether it is relevant to firewalling or masquerading. Use port ranges (for example, 500:510) whenever appropriate. Aspects related to the DMZ (demilitarized zone) as mentioned in the configuration file are not covered here. They are applicable only to a more complex network infrastructure found in larger organizations (corporate networks), which requires extensive configuration and in-depth knowledge about the subject.

To enable SuSEFirewall2, use sudo systemctl enable SuSEfirewall2 or use the YaST module Services Manager.

FW_DEV_EXT (firewall, masquerading)

The device linked to the Internet. For a modem connection, enter ppp0. DSL connections use dsl0. Specify auto to use the interface that corresponds to the default route.

FW_DEV_INT (firewall, masquerading)

The device linked to the internal, private network (such as eth0). Leave this blank if there is no internal network and the firewall protects only the host on which it runs.

FW_ROUTE (firewall, masquerading)

If you need the masquerading function, set this to yes. Your internal hosts will not be visible to the outside, because their private network addresses (for example 192.168.x.x) are ignored by Internet routers.

For a firewall without masquerading, set this to yes if you want to allow access to the internal network. Your internal hosts need to use officially registered IP addresses in this case. Normally, however, you should not allow access to your internal network from the outside.

FW_MASQUERADE (masquerading)

Set this to yes if you need the masquerading function. This provides a virtually direct connection to the Internet for the internal hosts. It is more secure to have a proxy server between the hosts of the internal network and the Internet. Masquerading is not needed for services that a proxy server provides.

FW_MASQ_NETS (masquerading)

Specify the hosts or networks to masquerade, leaving a space between the individual entries. For example:

FW_MASQ_NETS="192.168.0.0/24 192.168.10.1"
FW_PROTECT_FROM_INT (firewall)

Set this to yes to protect your firewall host from attacks originating in your internal network. Services are only available to the internal network if explicitly enabled. Also see FW_SERVICES_INT_TCP and FW_SERVICES_INT_UDP.

FW_SERVICES_EXT_TCP (firewall)

Enter the TCP ports that should be made available. Leave this blank for a normal workstation at home that should not offer any services.

FW_SERVICES_EXT_UDP (firewall)

Leave this blank unless you run a UDP service and want to make it available to the outside. The services that use UDP include DNS servers, IPsec, TFTP, DHCP and others. In that case, enter the UDP ports to use.

FW_SERVICES_ACCEPT_EXT (firewall)

List services to allow from the Internet. This is a more generic form of the FW_SERVICES_EXT_TCP and FW_SERVICES_EXT_UDP settings, and more specific than FW_TRUSTED_NETS. The notation is a space-separated list of NET,PROTOCOL[,DPORT][,SPORT], for example 0/0,tcp,22 or 0/0,tcp,22,,hitcount=3,blockseconds=60,recentname=ssh, which means: allow a maximum of three SSH connects per minute from one IP address.

FW_SERVICES_INT_TCP (firewall)

With this variable, define the services available for the internal network. The notation is the same as for FW_SERVICES_EXT_TCP, but the settings are applied to the internal network. The variable only needs to be set if FW_PROTECT_FROM_INT is set to yes.

FW_SERVICES_INT_UDP (firewall)

See FW_SERVICES_INT_TCP.

FW_SERVICES_ACCEPT_INT (firewall)

List services to allow from internal hosts. See FW_SERVICES_ACCEPT_EXT.

FW_SERVICES_ACCEPT_RELATED_* (firewall)

These variables control how SuSEFirewall2 handles packets that netfilter considers RELATED.

For example, to allow finer grained filtering of Samba broadcast packets, RELATED packets are not accepted unconditionally. Variables starting with FW_SERVICES_ACCEPT_RELATED_ allow restricting RELATED packets handling to certain networks, protocols and ports.

This means that adding connection tracking modules (conntrack modules) to FW_LOAD_MODULES does not automatically result in accepting the packets tagged by those modules. Additionally, you must set variables starting with FW_SERVICES_ACCEPT_RELATED_ to a suitable value.

FW_CUSTOMRULES (firewall)

Uncomment this variable to install custom rules. Find examples in /etc/sysconfig/scripts/SuSEfirewall2-custom.
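
To illustrate how these variables work together, here is a minimal sketch of /etc/sysconfig/SuSEfirewall2 for a masquerading gateway. The interface names, network, and port are assumptions and need to be adapted to your setup:

FW_DEV_EXT="dsl0"
FW_DEV_INT="eth0"
FW_ROUTE="yes"
FW_MASQUERADE="yes"
FW_MASQ_NETS="192.168.0.0/24"
FW_SERVICES_EXT_TCP="22"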

After configuring the firewall, test your setup. The firewall rule sets are created by entering systemctl start SuSEfirewall2 as root. Then use telnet, for example, from an external host to see whether the connection is actually denied. After that, review the output of journalctl (see Chapter 16, journalctl: Query the systemd Journal), where you should see something like this:

Mar 15 13:21:38 linux kernel: SFW2-INext-DROP-DEFLT IN=eth0
OUT= MAC=00:80:c8:94:c3:e7:00:a0:c9:4d:27:56:08:00 SRC=192.168.10.0
DST=192.168.10.1 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=15330 DF PROTO=TCP
SPT=48091 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0
OPT (020405B40402080A061AFEBC0000000001030300)

Other packages for testing your firewall setup are Nmap (a port scanner) and OpenVAS (Open Vulnerability Assessment System). The documentation of Nmap is found at /usr/share/doc/packages/nmap after installing the package, and the documentation of OpenVAS resides at http://www.openvas.org.
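
For example, a quick scan of the most relevant ports from an external host could look like this; the host name and port list are assumptions:

nmap -p 22,25,110 jupiter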

15.5 For More Information

The most up-to-date information and other documentation about the SuSEFirewall2 package is found in /usr/share/doc/packages/SuSEfirewall2. The home page of the netfilter and iptables project, http://www.netfilter.org, provides a large collection of documents in many languages.

16 Configuring a VPN Server

Abstract

Today, Internet connections are cheap and available almost everywhere. However, not all connections are secure. Using a Virtual Private Network (VPN), you can create a secure network within an insecure network such as the Internet or Wi-Fi. It can be implemented in different ways and serves several purposes. In this chapter, we focus on the OpenVPN implementation to link branch offices via secure wide area networks (WANs).

16.1 Conceptual Overview

This section defines some terms regarding VPN and gives a brief overview of some scenarios.

16.1.1 Terminology

Endpoint

The two ends of a tunnel, the source or destination client.

Tap Device

A tap device simulates an Ethernet device (layer 2 of the OSI model). A tap device is used for creating a network bridge. It works with Ethernet frames.

Tun Device

A tun device simulates a point-to-point network (layer 3 of the OSI model). A tun device is used with routing. It works with IP packets.

Tunnel

Linking two locations through a primarily public network. From a more technical viewpoint, it is a connection between the client's device and the server's device. Usually a tunnel is encrypted, but it does not need to be by definition.

16.1.2 VPN Scenarios

Whenever you set up a VPN connection, your IP packets are transferred over a secured tunnel. A tunnel can use either a tun or tap device. They are virtual network kernel drivers which implement the transmission of Ethernet frames or IP frames/packets.

Any user space program, such as OpenVPN, can attach itself to a tun or tap device to receive packets sent by your operating system. The program is also able to write packets to the device.

There are many solutions to set up and build a VPN connection. This section focuses on the OpenVPN package. Compared to other VPN software, OpenVPN can be operated in two modes:

Routed VPN

Routing is an easy solution to set up. It is more efficient and scales better than a bridged VPN. Furthermore, it allows the user to tune the MTU (Maximum Transmission Unit) to raise efficiency. However, in a heterogeneous environment, if you do not have a Samba server on the gateway, NetBIOS broadcasts do not work. If you need IPv6, the drivers for the tun devices on both ends must support this protocol explicitly. This scenario is depicted in Figure 16.1, “Routed VPN”.

Figure 16.1: Routed VPN
Bridged VPN

Bridging is a more complex solution. It is recommended when you need to browse Windows file shares across the VPN without setting up a Samba or WINS server. Bridged VPN is also needed to use non-IP protocols (such as IPX) or applications relying on network broadcasts. However, it is less efficient than routed VPN. Another disadvantage is that it does not scale well. This scenario is depicted in the following figures.

Figure 16.2: Bridged VPN - Scenario 1
Figure 16.3: Bridged VPN - Scenario 2
Figure 16.4: Bridged VPN - Scenario 3

The major difference between bridging and routing is that a bridged VPN forwards IP broadcasts while a routed VPN does not.

16.2 Setting Up a Simple Test Scenario

In the following example, we will create a point-to-point VPN tunnel. The example demonstrates how to create a VPN tunnel between one client and a server. It is assumed that your VPN server will use private IP addresses like IP_OF_SERVER and your client will use the IP address IP_OF_CLIENT. Make sure you select addresses which do not conflict with other IP addresses.

Warning
Warning: Use Only for Testing

The following scenario is provided as an example meant for familiarizing yourself with VPN technology. Do not use this in a real world scenario, as it can compromise the security and safety of your IT infrastructure!

Tip
Tip: Names for Configuration File

To simplify working with OpenVPN configuration files, we recommend the following:

  • Place your OpenVPN configuration files in the directory /etc/openvpn.

  • Name your configuration files MY_CONFIGURATION.conf.

  • If there are multiple files that belong to the same configuration, place them in a subdirectory like /etc/openvpn/MY_CONFIGURATION.

16.2.1 Configuring the VPN Server

To configure a VPN server, proceed as follows:

Procedure 16.1: VPN Server Configuration
  1. Install the package openvpn on the machine that will later become your VPN server.

  2. Open a shell, become root and create the VPN secret key:

    root # openvpn --genkey --secret /etc/openvpn/secret.key
  3. Copy the secret key to your client:

    root # scp /etc/openvpn/secret.key root@IP_OF_CLIENT:/etc/openvpn/
  4. Create the file /etc/openvpn/server.conf with the following content:

    dev tun
    ifconfig IP_OF_SERVER IP_OF_CLIENT
    secret secret.key
  5. Set up a tun device configuration by creating a file called /etc/sysconfig/network/ifcfg-tun0 with the following content:

    STARTMODE='manual'
    BOOTPROTO='static'
    TUNNEL='tun'
    TUNNEL_SET_OWNER='nobody'
    TUNNEL_SET_GROUP='nobody'
    LINK_REQUIRED=no
    PRE_UP_SCRIPT='systemd:openvpn@server'
    PRE_DOWN_SCRIPT='systemd:openvpn@server'

    The notation openvpn@server points to the OpenVPN server configuration file located at /etc/openvpn/server.conf. For more information, see /usr/share/doc/packages/openvpn/README.SUSE.

  6. If you use a firewall, start YaST and open UDP port 1194 (Security and Users › Firewall › Allowed Services).

  7. Start the OpenVPN server service by setting the tun device to up:

    tux > sudo wicked ifup tun0

    You should see the confirmation:

    tun0            up

16.2.2 Configuring the VPN Clients

To configure the VPN client, do the following:

Procedure 16.2: VPN Client Configuration
  1. Install the package openvpn on your client VPN machine.

  2. Create /etc/openvpn/client.conf with the following content:

    remote DOMAIN_OR_PUBLIC_IP_OF_SERVER
    dev tun
    ifconfig IP_OF_CLIENT IP_OF_SERVER
    secret secret.key

    Replace the placeholder DOMAIN_OR_PUBLIC_IP_OF_SERVER in the first line with either the domain name or the public IP address of your server.

  3. Set up a tun device configuration by creating a file called /etc/sysconfig/network/ifcfg-tun0 with the following content:

    STARTMODE='manual'
    BOOTPROTO='static'
    TUNNEL='tun'
    TUNNEL_SET_OWNER='nobody'
    TUNNEL_SET_GROUP='nobody'
    LINK_REQUIRED=no
    PRE_UP_SCRIPT='systemd:openvpn@client'
    PRE_DOWN_SCRIPT='systemd:openvpn@client'
  4. If you use a firewall, start YaST and open UDP port 1194 as described in Step 6 of Procedure 16.1, “VPN Server Configuration”.

  5. Start the OpenVPN client service by setting the tun device to up:

    tux > sudo wicked ifup tun0

    You should see the confirmation:

    tun0            up

16.2.3 Testing the VPN Example Scenario

After OpenVPN has successfully started, test the availability of the tun device with the following command:

ip addr show tun0

To verify the VPN connection, use ping on both client and server side to see if they can reach each other. Ping the server from the client:

ping -I tun0 IP_OF_SERVER

Ping the client from the server:

ping -I tun0 IP_OF_CLIENT

16.3 Setting Up Your VPN Server Using a Certificate Authority

The example in Section 16.2 is useful for testing, but not for daily work. This section explains how to build a VPN server that allows more than one connection at the same time. This is done with a public key infrastructure (PKI). A PKI consists of a pair of public and private keys for the server and each client, and a master certificate authority (CA), which is used to sign every server and client certificate.

This setup involves the following basic steps, described in the next subsections: creating the certificates, configuring the VPN server, and configuring the VPN clients.

16.3.1 Creating Certificates

Before a VPN connection can be established, the client must authenticate the server certificate. Conversely, the server must also authenticate the client certificate. This is called mutual authentication. To create such certificates, use the YaST CA module. See Chapter 17, Managing X.509 Certification for more details.

To create a VPN root CA, a server certificate, and client certificates, proceed as follows:

Procedure 16.3: Creating a VPN Server Certificate
  1. Prepare a common VPN Certificate Authority (CA):

    1. Start the YaST CA module.

    2. Click Create Root CA.

    3. Enter a CA Name and a Common Name, for example VPN-Server-CA.

    4. Fill out the other boxes like e-mail addresses, organization, etc. and proceed with Next.

    5. Enter your password twice and proceed with Next.

    6. Review the summary. YaST displays the current settings for confirmation. Click Create. The root CA is created and displayed in the overview.

  2. Create a VPN server certificate:

    1. Select the root CA you created in Step 1 and click Enter CA.

    2. When prompted, enter the CA Password.

    3. Click the Certificate tab and click Add › Add Server Certificate.

    4. Specify a Common Name, for example, openvpn.example.com and proceed with Next.

    5. Specify your password and confirm it. Then click Advanced options.

      Switch to the Advanced Settings › Key Usage list and check one of the following sets:

      • digitalSignature and keyEncipherment, or,

      • digitalSignature and keyAgreement

      Switch to the Advanced Settings › extendedKeyUsage and type serverAuth for a server certificate.

      Important
      Important: Avoiding Man-in-the-Middle Attacks

      If you are using the method remote-cert-tls server or remote-cert-tls client to verify certificates, limit the number of times a key can be used. This mitigates man-in-the-middle attacks.

      For more information, see http://openvpn.net/index.php/open-source/documentation/howto.html#mitm.

      Finish with Ok and proceed with Next.

    6. Review the summary. YaST displays the current settings for confirmation. Click Create. When the VPN server certificate is created, it is displayed in the Certificates tab.

  3. Create VPN client certificates:

    1. Make sure you are on the Certificates tab.

    2. Click Add › Add Client Certificate.

    3. Enter a Common Name, for example, client1.example.com.

    4. Enter the e-mail addresses for your client, for example, user1@client1.example.com, and click Add. Proceed with Next.

    5. Enter your password twice and click Advanced options.

      Switch to Advanced Settings › Key Usage list and check one of the following flags:

      • digitalSignature or,

      • keyAgreement or,

      • digitalSignature and keyAgreement.

      Switch to the Advanced Settings › extendedKeyUsage and type clientAuth for a client certificate.

    6. Review the summary. YaST displays the current settings for confirmation. Click Create. The VPN client certificate is created and is displayed in the Certificates tab.

    7. If you need certificates for more clients, repeat Step 3.

After you have successfully finished Procedure 16.3, “Creating a VPN Server Certificate”, you have a VPN root CA, a VPN server certificate, and one or more VPN client certificates. To finish the task, proceed with the following procedure:

  1. Choose the Certificates tab.

  2. Export the VPN server certificate in two formats: the certificate in PEM format and the unencrypted key in PEM format.

    1. Select your VPN server certificate (openvpn.example.com in our example) and choose Export › Export to File.

    2. Select Only the Certificate in PEM Format, enter your VPN server certificate password and save the file to /etc/openvpn/server_crt.pem.

    3. Repeat Step 2.a and Step 2.b, but choose the format Only the Key Unencrypted in PEM Format. Save the file to /etc/openvpn/server_key.pem.

  3. Export the VPN client certificates and choose an export format, PEM or PKCS12 (preferred). For each client:

    1. Select your VPN client certificate (client1.example.com in our example) and choose Export › Export to File.

    2. Select Like PKCS12 and Include the CA Chain, enter your VPN client certificate key password and provide a PKCS12 password. Enter a File Name, click Browse and save the file to /etc/openvpn/client1.p12.

  4. Copy the files to your client (in our example, client1.example.com).

  5. Export the VPN CA (in our example VPN-Server-CA):

    1. Switch to the Description tab.

    2. Select Advanced › Export to File.

    3. Mark Only the Certificate in PEM Format and save the file to /etc/openvpn/vpn_ca.pem.

If desired, the client PKCS12 file can be converted into the PEM format using this command:

openssl pkcs12 -in client1.p12 -out client1.pem

Enter your client password to create the client1.pem file. The PEM file contains the client certificate, client key, and the CA certificate. You can split this combined file using a text editor and create three separate files. The file names can be used for the ca, cert, and key options in the OpenVPN configuration file (see Example 16.1, “VPN Server Configuration File”).
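
Alternatively, the three parts can be extracted directly from the PKCS12 file with openssl instead of using a text editor. This sketch writes the CA certificate, the client certificate, and the unencrypted client key to separate files; the output file names are assumptions:

openssl pkcs12 -in client1.p12 -cacerts -nokeys -out ca.pem
openssl pkcs12 -in client1.p12 -clcerts -nokeys -out client1_crt.pem
openssl pkcs12 -in client1.p12 -nocerts -nodes -out client1_key.pem

Note that -nodes writes the private key unencrypted, so protect the resulting file accordingly.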

16.3.2 Configuring the VPN Server

As the basis of your configuration file, copy /usr/share/doc/packages/openvpn/sample-config-files/server.conf to /etc/openvpn/. Then customize it to your needs.

Example 16.1: VPN Server Configuration File
# /etc/openvpn/server.conf
port 1194 1
proto udp 2
dev tun0 3

# Security 4

ca    vpn_ca.pem
cert  server_crt.pem
key   server_key.pem

# ns-cert-type server 
remote-cert-tls client 5
dh   server/dh2048.pem 6

server 192.168.1.0 255.255.255.0 7
ifconfig-pool-persist /var/run/openvpn/ipp.txt 8

# Privileges 9
user nobody
group nobody

# Other configuration 10
keepalive 10 120
comp-lzo
persist-key
persist-tun
# status      /var/log/openvpn-status.tun0.log 11
# log-append  /var/log/openvpn-server.log 12
verb 4

1

The TCP/UDP port on which OpenVPN listens. You need to open the port in the firewall, see Chapter 15, Masquerading and Firewalls. The standard port for VPN is 1194, so you can usually leave that as it is.

2

The protocol, either UDP or TCP.

3

The tun or tap device. For the difference between these, see Section 16.1.1, “Terminology”.

4

The following lines contain the relative or absolute paths to the CA certificate (ca), the server certificate (cert), and the private server key (key). These were generated in Section 16.3.1, “Creating Certificates”.

5

Require that peer certificates have been signed with an explicit key usage and extended key usage based on RFC3280 TLS rules. There is a description of how to make a server use this explicit key in Procedure 16.3, “Creating a VPN Server Certificate”.

6

The Diffie-Hellman parameters. Create the required file with the following command:

openssl dhparam -out /etc/openvpn/dh2048.pem 2048

7

Supplies a VPN subnet. The server can be reached at 192.168.1.1.

8

Records a mapping of clients and their virtual IP addresses in the given file. This is useful when the server goes down and, after the restart, the clients should get their previously assigned IP addresses.

9

For security reasons, run the OpenVPN daemon with reduced privileges. To do so, specify that it should use the group and user nobody.

10

Several other configuration options—see the comments in the example configuration files under /usr/share/doc/packages/openvpn/sample-config-files.

11

Enable this option to write short status updates with statistical data (operational status dump) to the named file. By default, this is not enabled.

All output is written to syslog. If you have more than one configuration file (for example, one for home and another for work), it is recommended to include the device name in the status file name. This avoids accidentally overwriting output files. In this case, it is tun0, taken from the dev directive—see 3.

12

By default, log messages go to syslog. Override this behavior by removing the hash character. In that case, all messages go to /var/log/openvpn-server.log. Do not forget to configure a logrotate service. See man 8 logrotate for further details.
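
With the configuration file in place, the server can be started via the systemd template unit shipped with the openvpn package; the instance name corresponds to the name of the configuration file (server for /etc/openvpn/server.conf). This assumes the packaging described in README.SUSE:

tux > sudo systemctl start openvpn@server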

After the server has been started, you can see the log messages of your OpenVPN server under /var/log/openvpn.log. The first start should finish with:

... Initialization Sequence Completed

If you do not see this message, check the log carefully for any hints of what is wrong in your configuration file.

16.3.3 Configuring the VPN Clients

As the basis of your configuration file, copy /usr/share/doc/packages/openvpn/sample-config-files/client.conf to /etc/openvpn/. Then customize it to your needs.

Example 16.2: VPN Client Configuration File
# /etc/openvpn/client.conf
client 1
dev tun 2
proto udp 3
remote IP_OR_HOST_NAME 1194 4
resolv-retry infinite
nobind

remote-cert-tls server 5

# Privileges 6
user nobody
group nobody

# Try to preserve some state across restarts.
persist-key
persist-tun

# Security 7
pkcs12 client1.p12

comp-lzo 8

1

Specifies that this machine is a client.

2

The network device. Both clients and server must use the same device.

3

The protocol. Use the same settings as on the server.

4

Replace the placeholder IP_OR_HOST_NAME with the respective host name or IP address of your VPN server. After the host name, the port of the server is given. You can have multiple lines of remote entries pointing to different VPN servers. This is useful for load balancing between different VPN servers.

5

This is a security option for clients which ensures that the host they connect to is a designated server.

6

For security reasons, run the OpenVPN daemon with reduced privileges. To do so, specify that it should use the group and user nobody.

7

Contains the client credentials (certificate and key). For security reasons, use a separate pair of files for each client.

8

Turn on compression. Only use this parameter if compression is enabled on the server as well.
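
Analogous to the server, the client can be started via its systemd instance, assuming the configuration is stored in /etc/openvpn/client.conf:

tux > sudo systemctl start openvpn@client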

16.4 For More Information

For more information on setting up a VPN connection using NetworkManager, see Section 30.3.5, “NetworkManager and VPN”.

For more information about VPN in general, see:

  • http://www.openvpn.net: the OpenVPN home page

  • man openvpn

  • /usr/share/doc/packages/openvpn/sample-config-files/: example configuration files for different scenarios.

  • /usr/src/linux/Documentation/networking/tuntap.txt, available after installing the kernel-source package.

17 Managing X.509 Certification

Abstract

An increasing number of authentication mechanisms are based on cryptographic procedures. Digital certificates that assign cryptographic keys to their owners play an important role in this context. These certificates are used for communication and can also be found, for example, on company ID cards. The generation and administration of certificates is mostly handled by official institutions that offer this as a commercial service. In some cases, however, it may make sense to carry out these tasks yourself, for example, if a company does not want to pass personal data to third parties.

YaST provides two modules for certification, which offer basic management functions for digital X.509 certificates. The following sections explain the basics of digital certification and how to use YaST to create and administer certificates of this type.

17.1 The Principles of Digital Certification

Digital certification uses cryptographic processes to encrypt and protect data from access by unauthorized people. The user data is encrypted using a second data record, or key. The key is applied to the user data in a mathematical process, producing an altered data record in which the original content can no longer be identified. Asymmetrical encryption is now in general use (public key method). Keys always occur in pairs:

Private Key

The private key must be kept safely by the key owner. Accidental publication of the private key compromises the key pair and renders it useless.

Public Key

The key owner circulates the public key for use by third parties.

17.1.1 Key Authenticity

Because the public key process is in widespread use, there are many public keys in circulation. Successful use of this system requires that every user be sure that a public key actually belongs to the assumed owner. The assignment of users to public keys is confirmed by trustworthy organizations with public key certificates. Such certificates contain the name of the key owner, the corresponding public key, and the electronic signature of the person issuing the certificate.

Trustworthy organizations that issue and sign public key certificates are usually part of a certification infrastructure. This is responsible for the other aspects of certificate management, such as publication, withdrawal, and renewal of certificates. An infrastructure of this kind is generally called a public key infrastructure or PKI. One familiar PKI is the OpenPGP standard in which users publish their certificates themselves without central authorization points. These certificates become trustworthy when signed by other parties in the web of trust.

The X.509 Public Key Infrastructure (PKIX) is an alternative model defined by the IETF (Internet Engineering Task Force) that serves as a model for almost all publicly-used PKIs today. In this model, authentication is made by certificate authorities (CA) in a hierarchical tree structure. The root of the tree is the root CA, which certifies all sub-CAs. The lowest level of sub-CAs issue user certificates. The user certificates are trustworthy because their certification can be traced back to the root CA.

The security of such a PKI depends on the trustworthiness of the CA certificates. To make certification practices clear to PKI customers, the PKI operator defines a certification practice statement (CPS) that defines the procedures for certificate management. This should ensure that the PKI only issues trustworthy certificates.

17.1.2 X.509 Certificates

An X.509 certificate is a data structure with several fixed fields and, optionally, additional extensions. The fixed fields mainly contain the name of the key owner, the public key, and the data relating to the issuing CA (name and signature). For security reasons, a certificate should only have a limited period of validity, so a field is also provided for this date. The CA guarantees the validity of the certificate in the specified period. The CPS usually requires the PKI (the issuing CA) to create and distribute a new certificate before expiration.

The extensions can contain any additional information. An application is only required to be able to evaluate an extension if it is identified as critical. If an application does not recognize a critical extension, it must reject the certificate. Some extensions are only useful for a specific application, such as signature or encryption.

Table 17.1 shows the fields of a basic X.509 certificate in version 3.

Table 17.1: X.509v3 Certificate

Field                     Content
Version                   The version of the certificate, for example, v3
Serial Number             Unique certificate ID (an integer)
Signature                 The ID of the algorithm used to sign the certificate
Issuer                    Unique name (DN) of the issuing authority (CA)
Validity                  Period of validity
Subject                   Unique name (DN) of the owner
Subject Public Key Info   Public key of the owner and the ID of the algorithm
Issuer Unique ID          Unique ID of the issuing CA (optional)
Subject Unique ID         Unique ID of the owner (optional)
Extensions                Optional additional information, such as KeyUsage or BasicConstraints
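
The fields of any certificate can be inspected with the openssl command line tool, for example (CERT.pem being a placeholder for the certificate file):

openssl x509 -in CERT.pem -noout -text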

17.1.3 Blocking X.509 Certificates

If a certificate becomes untrustworthy before it has expired, it must be blocked immediately. This can become necessary if, for example, the private key has accidentally been made public. Blocking certificates is especially important if the private key belongs to a CA rather than a user certificate. In this case, all user certificates issued by the relevant CA must be blocked immediately. If a certificate is blocked, the PKI (the responsible CA) must make this information available to all those involved using a certificate revocation list (CRL).

These lists are supplied by the CA to public CRL distribution points (CDPs) at regular intervals. The CDP can optionally be named as an extension in the certificate, so a checker can fetch a current CRL for validation purposes. One way to do this is the online certificate status protocol (OCSP). The authenticity of the CRLs is ensured with the signature of the issuing CA. Table 17.2 shows the basic parts of an X.509 CRL.

Table 17.2: X.509 Certificate Revocation List (CRL)

Field                         Content
Version                       The version of the CRL, such as v2
Signature                     The ID of the algorithm used to sign the CRL
Issuer                        Unique name (DN) of the publisher of the CRL (usually the issuing CA)
This Update                   Time of publication (date, time) of this CRL
Next Update                   Time of publication (date, time) of the next CRL
List of revoked certificates  Every entry contains the serial number of the certificate, the time of revocation, and optional extensions (CRL entry extensions)
Extensions                    Optional CRL extensions
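
Similarly, the contents of a CRL can be displayed with openssl (CRL.pem being a placeholder for the CRL file):

openssl crl -in CRL.pem -noout -text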

17.1.4 Repository for Certificates and CRLs

The certificates and CRLs for a CA must be made publicly accessible using a repository. Because the signature protects the certificates and CRLs from being forged, the repository itself does not need to be secured in a special way. Instead, it tries to grant the simplest and fastest access possible. For this reason, certificates are often provided on an LDAP or HTTP server. Find explanations about LDAP in Chapter 5, LDAP—A Directory Service.

17.1.5 Proprietary PKI

YaST contains modules for the basic management of X.509 certificates. This mainly involves the creation of CAs, sub-CAs, and their certificates. The services of a PKI go far beyond simply creating and distributing certificates and CRLs. The operation of a PKI requires a well-conceived administrative infrastructure allowing continuous update of certificates and CRLs. This infrastructure is provided by commercial PKI products and can also be partly automated. YaST provides tools for creating and distributing CAs and certificates, but cannot currently offer this background infrastructure. To set up a small PKI, you can use the available YaST modules. However, you should use commercial products to set up an official or commercial PKI.

17.2 YaST Modules for CA Management

YaST provides two modules for basic CA management. The primary management tasks with these modules are explained here.

17.2.1 Creating a Root CA

The first step when setting up a PKI is to create a root CA. Do the following:

  1. Start YaST and go to Security and Users › CA Management.

  2. Click Create Root CA.

  3. Enter the basic data for the CA in the first dialog, shown in Figure 17.1. The text boxes have the following meanings:

    Figure 17.1: YaST CA Module—Basic Data for a Root CA
    CA Name

    Enter the technical name of the CA. Directory names, among other things, are derived from this name, which is why only the characters listed in the help can be used. The technical name is also displayed in the overview when the module is started.

    Common Name

    Enter the name for use in referring to the CA.

    E-Mail Addresses

    Several e-mail addresses can be entered that can be seen by the CA user. This can be helpful for inquiries.

    Country

    Select the country where the CA is operated.

    Organization, Organizational Unit, Locality, State

    Optional values

    Proceed with Next.

  4. Enter a password in the second dialog. This password is always required when using the CA—when creating a sub-CA or generating certificates. The text boxes have the following meaning:

    Key Length

    Key Length contains a meaningful default and does not generally need to be changed unless an application cannot deal with this key length. The higher the number, the more secure the key is.

    Valid Period (days)

    The Valid Period in the case of a CA defaults to 3650 days (roughly ten years). This long period makes sense because the replacement of a deleted CA involves an enormous administrative effort.

    Clicking Advanced Options opens a dialog for setting different attributes from the X.509 extensions (Figure 17.4, “YaST CA Module—Extended Settings”). These values have rational default settings and should only be changed if you are really sure of what you are doing. Proceed with Next.

  5. Review the summary. YaST displays the current settings for confirmation. Click Create. The root CA is created then appears in the overview.

Tip
Tip

In general, it is best not to allow user certificates to be issued by the root CA. It is better to create at least one sub-CA and create the user certificates from there. This has the advantage that the root CA can be kept isolated and secure, for example, on an isolated computer on secure premises. This makes it very difficult to attack the root CA.

17.2.2 Changing Password

If you need to change your password for your CA, proceed as follows:

  1. Start YaST and open the CA module.

  2. Select the required root CA and click Enter CA.

  3. Enter the password if you are entering the CA for the first time. YaST displays the CA key information in the Description tab (see Figure 17.2).

  4. Click Advanced and select Change CA Password. A dialog opens.

  5. Enter the old and the new password.

  6. Finish with OK.

17.2.3 Creating or Revoking a Sub-CA

A sub-CA is created in the same way as a root CA.

Note
Note

The validity period for a sub-CA must be fully within the validity period of the parent CA. Because a sub-CA is always created after the parent CA, the default value leads to an error message. To avoid this, enter a permissible value for the period of validity.

Do the following:

  1. Start YaST and open the CA module.

  2. Select the required root CA and click Enter CA.

  3. Enter the password if you are entering a CA for the first time. YaST displays the CA key information in the tab Description (see Figure 17.2).

    Figure 17.2: YaST CA Module—Using a CA
  4. Click Advanced and select Create SubCA. This opens the same dialog as for creating a root CA.

  5. Proceed as described in Section 17.2.1, “Creating a Root CA”.

    It is possible to use one password for all your CAs. Enable Use CA Password as Certificate Password to give your sub-CAs the same password as your root CA. This helps to reduce the number of passwords for your CAs.

    Note
    Note: Check your Valid Period

    Take into account that the validity period must be shorter than the validity period of the root CA.

  6. Select the Certificates tab. Revoke compromised or otherwise unwanted sub-CAs here using Revoke. Revocation alone is not enough to deactivate a sub-CA. You must also publish revoked sub-CAs in a CRL. The creation of CRLs is described in Section 17.2.6, “Creating Certificate Revocation Lists (CRLs)”.

  7. Finish with OK.

17.2.4 Creating or Revoking User Certificates

Creating client and server certificates is very similar to creating CAs in Section 17.2.1, “Creating a Root CA”. The same principles apply here. In certificates intended for e-mail signature, the e-mail address of the sender (the private key owner) should be contained in the certificate to enable the e-mail program to assign the correct certificate.

For certificate assignment during encryption, it is necessary for the e-mail address of the recipient (the public key owner) to be included in the certificate. In the case of server and client certificates, the host name of the server must be entered in the Common Name field. The default validity period for certificates is 365 days.

To create client and server certificates, do the following:

  1. Start YaST and open the CA module.

  2. Select the required root CA and click Enter CA.

  3. Enter the password if you are entering a CA for the first time. YaST displays the CA key information in the Description tab.

  4. Click Certificates (see Figure 17.3).

    Figure 17.3: Certificates of a CA
  5. Click Add › Add Server Certificate and create a server certificate.

  6. Click Add › Add Client Certificate and create a client certificate. Do not forget to enter an e-mail address.

  7. Finish with OK.

To revoke compromised or otherwise unwanted certificates, do the following:

  1. Start YaST and open the CA module.

  2. Select the required root CA and click Enter CA.

  3. Enter the password if you are entering a CA for the first time. YaST displays the CA key information in the Description tab.

  4. Click Certificates (see Section 17.2.3, “Creating or Revoking a Sub-CA”).

  5. Select the certificate to revoke and click Revoke.

  6. Choose a reason to revoke this certificate.

  7. Finish with OK.

Note
Note

Revocation alone is not enough to deactivate a certificate. Also publish revoked certificates in a CRL. Section 17.2.6, “Creating Certificate Revocation Lists (CRLs)” explains how to create CRLs. Revoked certificates can be completely removed after publication in a CRL with Delete.

17.2.5 Changing Default Values

The previous sections explained how to create sub-CAs, client certificates, and server certificates. Special settings are used in the extensions of the X.509 certificate. These settings have been given rational defaults for every certificate type and do not normally need to be changed. However, it may be that you have special requirements for these extensions. In this case, it may make sense to adjust the defaults so you do not need to change the settings every time you create a certificate.

  1. Start YaST and open the CA module.

  2. Enter the required root CA, as described in Section 17.2.3, “Creating or Revoking a Sub-CA”.

  3. Click Advanced › Edit Default.

  4. Choose the type of certificate to change and proceed with Next.

  5. The dialog for changing the defaults as shown in Figure 17.4, “YaST CA Module—Extended Settings” opens.

    Figure 17.4: YaST CA Module—Extended Settings
  6. Change the associated value on the right side and set or delete the critical setting with critical.

  7. Click Next to see a short summary.

  8. Finish your changes with Save.

Note
Note

All changes to the defaults only affect objects created after this point. Already-existing CAs and certificates remain unchanged.

17.2.6 Creating Certificate Revocation Lists (CRLs)

If compromised or otherwise unwanted certificates need to be excluded from further use, they must first be revoked. The procedure for this is explained in Section 17.2.3, “Creating or Revoking a Sub-CA” (for sub-CAs) and Section 17.2.4, “Creating or Revoking User Certificates” (for user certificates). After this, a CRL must be created and published with this information.

The system maintains only one CRL for each CA. To create or update this CRL, do the following:

  1. Start YaST and open the CA module.

  2. Enter the required CA, as described in Section 17.2.3, “Creating or Revoking a Sub-CA”.

  3. Click CRL. The dialog that opens displays a summary of the last CRL of this CA.

  4. Create a new CRL with Generate CRL if you have revoked new sub-CAs or certificates since its creation.

  5. Specify the period of validity for the new CRL (default: 30 days).

  6. Click OK to create and display the CRL. Afterward, you must publish this CRL.

Note
Note

Applications that evaluate CRLs reject every certificate if the CRL is not available or has expired. As a PKI provider, it is your duty always to create and publish a new CRL before the current CRL expires (period of validity). YaST does not provide a function for automating this procedure.

17.2.7 Exporting CA Objects to LDAP

The executing computer should be configured with the YaST LDAP client for LDAP export. This provides LDAP server information at runtime that can be used when completing dialog fields. Otherwise (although export may be possible), all LDAP data must be entered manually. You must always enter several passwords (see Table 17.3, “Passwords during LDAP Export”).

Table 17.3: Passwords during LDAP Export

Password                  Meaning
LDAP Password             Authorizes the user to make entries in the LDAP tree.
Certificate Password      Authorizes the user to export the certificate.
New Certificate Password  The PKCS12 format is used during LDAP export. This format forces the assignment of a new password for the exported certificate.

Certificates, CAs, and CRLs can be exported to LDAP.

Exporting a CA to LDAP

To export a CA, enter the CA as described in Section 17.2.3, “Creating or Revoking a Sub-CA”. Select Extended › Export to LDAP in the subsequent dialog, which opens the dialog for entering LDAP data. If your system has been configured with the YaST LDAP client, the fields are already partly completed. Otherwise, enter all the data manually. Entries are made in LDAP in a separate tree with the attribute caCertificate.

Exporting a Certificate to LDAP

Enter the CA containing the certificate to export then select Certificates. Select the required certificate from the certificate list in the upper part of the dialog and select Export › Export to LDAP. The LDAP data is entered here in the same way as for CAs. The certificate is saved with the corresponding user object in the LDAP tree with the attributes userCertificate (PEM format) and userPKCS12 (PKCS12 format).

Exporting a CRL to LDAP

Enter the CA containing the CRL to export and select CRL. If desired, create a new CRL and click Export. The dialog that opens displays the export parameters. You can export the CRL for this CA either once or at regular intervals. Activate the export by selecting Export to LDAP and enter the respective LDAP data. To do this at regular intervals, select the Repeated Recreation and Export radio button and change the interval, if appropriate.

17.2.8 Exporting CA Objects as a File

If you have set up a repository on the computer for administering CAs, you can use this option to create the CA objects directly as a file at the correct location. Different output formats are available, such as PEM, DER, and PKCS12. In the case of PEM, it is also possible to choose whether a certificate should be exported with or without key and whether the key should be encrypted. In the case of PKCS12, it is also possible to export the certification path.

Export certificates and CAs to a file in the same way as described for LDAP in Section 17.2.7, “Exporting CA Objects to LDAP”, except select Export as File instead of Export to LDAP. This takes you to a dialog for selecting the required output format and entering the password and file name. The certificate is stored at the required location after clicking OK.

For CRLs click Export, select Export to file, choose the export format (PEM or DER) and enter the path. Proceed with OK to save it to the respective location.

Tip
Tip

You can select any storage location in the file system. This option can also be used to save CA objects on a transport medium, such as a flash disk. The /media directory generally holds any type of drive except the hard disk of your system.

17.2.9 Importing Common Server Certificates

If you have exported a server certificate with YaST to your media on an isolated CA management computer, you can import this certificate on a server as a common server certificate. Do this during installation or at a later point with YaST.

Note
Note

The certificate must be in PKCS12 format to be imported successfully.

The general server certificate is stored in /etc/ssl/servercerts and can be used there by any CA-supported service. When this certificate expires, it can easily be replaced using the same mechanisms. To get things functioning with the replaced certificate, restart the participating services.

Tip
Tip

If you select Import here, you can select the source in the file system. This option can also be used to import certificates from removable media, such as a flash disk.

To import a common server certificate, do the following:

  1. Start YaST and open Common Server Certificate under Security and Users.

  2. View the data for the current certificate in the description field after YaST has been started.

  3. Select Import and the certificate file.

  4. Enter the password and click Next. The certificate is imported then displayed in the description field.

  5. Close YaST with Finish.

Part IV Confining Privileges with AppArmor

18 Introducing AppArmor

Many security vulnerabilities result from bugs in trusted programs. A trusted program runs with privileges that attackers want to possess. The program fails to keep that trust if there is a bug in the program that allows the attacker to acquire said privilege.

19 Getting Started

Prepare a successful deployment of AppArmor on your system by carefully considering the following items:

20 Immunizing Programs

Effective hardening of a computer system requires minimizing the number of programs that mediate privilege, then securing the programs as much as possible. With AppArmor, you only need to profile the programs that are exposed to attack in your environment, which drastically reduces the amount of work required to harden your computer.

21 Profile Components and Syntax

Building AppArmor profiles to confine an application is very straightforward and intuitive. AppArmor ships with several tools that assist in profile creation. It does not require you to do any programming or script handling. The only task that is required of the administrator is to determine a policy of strictest access and execute permissions for each application that needs to be hardened.

22 AppArmor Profile Repositories

AppArmor ships with a set of profiles enabled by default. These are created by the AppArmor developers, and are stored in /etc/apparmor.d. In addition to these profiles, SUSE Linux Enterprise Desktop ships profiles for individual applications together with the relevant application. These profiles ar…

23 Building and Managing Profiles with YaST

YaST provides a basic way to build profiles and manage AppArmor® profiles. It provides two interfaces: a graphical one and a text-based one. The text-based interface consumes less resources and bandwidth, making it a better choice for remote administration, or for times when a local graphical enviro…

24 Building Profiles from the Command Line

AppArmor® provides the user the ability to use a command line interface rather than a graphical interface to manage and configure the system security. Track the status of AppArmor and create, delete, or modify AppArmor profiles using the AppArmor command line tools.

25 Profiling Your Web Applications Using ChangeHat

An AppArmor® profile represents the security policy for an individual program instance or process. It applies to an executable program, but if a portion of the program needs different access permissions than other portions, the program can “change hats” to use a different security context, distincti…

26 Confining Users with pam_apparmor

An AppArmor profile applies to an executable program; if a portion of the program needs different access permissions than other portions need, the program can change hats via change_hat to a different role, also known as a subprofile. The pam_apparmor PAM module allows applications to confine authen…

27 Managing Profiled Applications

After creating profiles and immunizing your applications, SUSE® Linux Enterprise Desktop becomes more efficient and better protected as long as you perform AppArmor® profile maintenance (which involves analyzing log files, refining your profiles, backing up your set of profiles and keeping it up-to-date).

28 Support

This chapter outlines maintenance-related tasks. Learn how to update AppArmor® and get a list of available man pages providing basic help for using the command line tools provided by AppArmor. Use the troubleshooting section to learn about some common problems encountered with AppArmor and their solutions.

29 AppArmor Glossary

18 Introducing AppArmor

Many security vulnerabilities result from bugs in trusted programs. A trusted program runs with privileges that attackers want to possess. The program fails to keep that trust if there is a bug in the program that allows the attacker to acquire said privilege.

AppArmor® is an application security solution designed specifically to apply privilege confinement to suspect programs. AppArmor allows the administrator to specify the domain of activities the program can perform by developing a security profile. A security profile is a listing of files that the program may access and the operations the program may perform. AppArmor secures applications by enforcing good application behavior without relying on attack signatures, so it can prevent attacks even if previously unknown vulnerabilities are being exploited.

18.1 AppArmor Components

AppArmor consists of:

  • A library of AppArmor profiles for common Linux* applications, describing what files the program needs to access.

  • A library of AppArmor profile foundation classes (profile building blocks) needed for common application activities, such as DNS lookup and user authentication.

  • A tool suite for developing and enhancing AppArmor profiles, so that you can change the existing profiles to suit your needs and create new profiles for your own local and custom applications.

  • Several specially modified applications that are AppArmor enabled to provide enhanced security in the form of unique subprocess confinement (including Apache).

  • The AppArmor-related kernel code and associated control scripts to enforce AppArmor policies on your SUSE® Linux Enterprise Desktop system.

18.2 Background Information on AppArmor Profiling

For more information about the science and security of AppArmor, refer to the following papers:

SubDomain: Parsimonious Server Security by Crispin Cowan, Steve Beattie, Greg Kroah-Hartman, Calton Pu, Perry Wagle, and Virgil Gligor

Describes the initial design and implementation of AppArmor. Published in the proceedings of the USENIX LISA Conference, December 2000, New Orleans, LA. This paper is now out of date, describing syntax and features that are different from the current AppArmor product. This paper should be used only for background, and not for technical documentation.

Defcon Capture the Flag: Defending Vulnerable Code from Intense Attack by Crispin Cowan, Seth Arnold, Steve Beattie, Chris Wright, and John Viega

A good guide to strategic and tactical use of AppArmor to solve severe security problems in a very short period of time. Published in the Proceedings of the DARPA Information Survivability Conference and Expo (DISCEX III), April 2003, Washington, DC.

AppArmor for Geeks by Seth Arnold

This document tries to convey a better understanding of the technical details of AppArmor. It is available at http://en.opensuse.org/SDB:AppArmor_geeks.

19 Getting Started

Prepare a successful deployment of AppArmor on your system by carefully considering the following items:

  1. Determine the applications to profile. Read more on this in Section 19.3, “Choosing Applications to Profile”.

  2. Build the needed profiles as roughly outlined in Section 19.4, “Building and Modifying Profiles”. Check the results and adjust the profiles when necessary.

  3. Update your profiles whenever your environment changes or you need to react to security events logged by the reporting tool of AppArmor. Refer to Section 19.5, “Updating Your Profiles”.

19.1 Installing AppArmor

AppArmor is installed and running on any installation of SUSE® Linux Enterprise Desktop by default, regardless of what patterns are installed. The packages listed below are needed for a fully-functional instance of AppArmor:

  • apparmor-docs

  • apparmor-parser

  • apparmor-profiles

  • apparmor-utils

  • audit

  • libapparmor1

  • perl-libapparmor

  • yast2-apparmor

Tip

If AppArmor is not installed on your system, install the pattern apparmor for a complete AppArmor installation. Either use the YaST Software Management module for installation, or use Zypper on the command line:

zypper in -t pattern apparmor

19.2 Enabling and Disabling AppArmor

AppArmor is configured to run by default on any fresh installation of SUSE Linux Enterprise Desktop. There are two ways of toggling the status of AppArmor:

Using YaST Services Manager

Disable or enable AppArmor by removing or adding its boot script to the sequence of scripts executed on system boot. Status changes are applied on reboot.

Using AppArmor Configuration Window

Toggle the status of AppArmor in a running system by switching it off or on using the YaST AppArmor Control Panel. Changes made here are applied instantaneously. The Control Panel triggers a stop or start event for AppArmor and removes or adds its boot script in the system's boot sequence.

To disable AppArmor permanently (by removing it from the sequence of scripts executed on system boot) proceed as follows:

  1. Start YaST.

  2. Select System › Services Manager.

  3. Mark apparmor by clicking its row in the list of services, then click Enable/Disable in the lower part of the window. Check that Enabled changed to Disabled in the apparmor row.

  4. Confirm with OK.

AppArmor will not be initialized on reboot, and stays inactive until you re-enable it. Re-enabling a service using the YaST Services Manager tool is similar to disabling it.
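
The same permanent toggle can also be performed from a shell. This is a sketch, assuming a systemd-based system such as SUSE Linux Enterprise Desktop 12:

systemctl disable apparmor   # do not initialize AppArmor on boot
systemctl stop apparmor      # additionally stop it in the running system
systemctl enable apparmor    # re-enable initialization on boot
systemctl start apparmor     # start it again immediately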

Toggle the status of AppArmor in a running system by using the AppArmor Configuration window. These changes take effect when you apply them and survive a reboot of the system. To toggle the status of AppArmor, proceed as follows:

  1. Start YaST, select AppArmor Configuration, and click Settings in the main window.

  2. Enable AppArmor by checking Enable AppArmor or disable AppArmor by deselecting it.

  3. Click Done in the AppArmor Configuration window.

19.3 Choosing Applications to Profile

You only need to protect the programs that are exposed to attacks in your particular setup, so only use profiles for those applications you actually run. Use the following list to determine the most likely candidates:

Network Agents
Web Applications
Cron Jobs

To find out which processes are currently running with open network ports and might need a profile to confine them, run aa-unconfined as root.

Example 19.1: Output of aa-unconfined
19848 /usr/sbin/cupsd not confined
19887 /usr/sbin/sshd not confined
19947 /usr/lib/postfix/master not confined
1328 /usr/sbin/ntpd confined by '/usr/sbin/ntpd (enforce)'

Each of the processes in the above example labeled not confined might need a custom profile to confine it. Those labeled confined by are already protected by AppArmor.

Tip: For More Information

For more information about choosing the right applications to profile, refer to Section 20.2, “Determining Programs to Immunize”.

19.4 Building and Modifying Profiles

AppArmor on SUSE Linux Enterprise Desktop ships with a preconfigured set of profiles for the most important applications. In addition, you can use AppArmor to create your own profiles for any application you want.

There are two ways of managing profiles. One is to use the graphical front-end provided by the YaST AppArmor modules and the other is to use the command line tools provided by the AppArmor suite itself. The main difference is that YaST supports only basic functionality for AppArmor profiles, while the command line tools let you update/tune the profiles in a more fine-grained way.

For each application, perform the following steps to create a profile:

  1. As root, let AppArmor create a rough outline of the application's profile by running aa-genprof PROGRAM_NAME.

    or

    Outline the basic profile by running YaST › Security and Users › AppArmor Configuration › Manually Add Profile and specifying the complete path to the application you want to profile.

    A new basic profile is outlined and put into learning mode, which means that it logs any activity of the program you are executing, but does not yet restrict it.

  2. Run the full range of the application's actions to let AppArmor get a very specific picture of its activities.

  3. Let AppArmor analyze the log files generated in Step 2 by typing S in aa-genprof.

    AppArmor scans the logs it recorded during the application's run and asks you to set the access rights for each event that was logged. Either set them for each file or use globbing.

  4. Depending on the complexity of your application, it might be necessary to repeat Step 2 and Step 3. Confine the application, exercise it under the confined conditions, and process any new log events. To properly confine the full range of an application's capabilities, you might be required to repeat this procedure often.

  5. When you finish aa-genprof, your profile is set to enforce mode. The profile is applied and AppArmor restricts the application according to it.

    If you started aa-genprof on an application that had an existing profile that was in complain mode, this profile remains in learning mode upon exit of this learning cycle. For more information about changing the mode of a profile, refer to Section 24.7.3.2, “aa-complain—Entering Complain or Learning Mode” and Section 24.7.3.6, “aa-enforce—Entering Enforce Mode”.
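
As an illustration, a profiling session for a hypothetical script /usr/local/bin/foo could look as follows (the path and options are placeholders):

aa-genprof /usr/local/bin/foo    # outline a basic profile, enter learning mode
/usr/local/bin/foo --all-tasks   # in a second terminal, exercise the application
# back in aa-genprof: press S to scan the logged events and answer
# the prompts, then press F to finish and switch to enforce mode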

Test your profile settings by performing every task you need with the application you confined. Normally, the confined program runs smoothly and you do not notice AppArmor activities. However, if you notice certain misbehavior with your application, check the system logs and see if AppArmor is too tightly confining your application. Depending on the log mechanism used on your system, there are several places to look for AppArmor log entries:

/var/log/audit/audit.log
The command journalctl | grep -i apparmor
The command dmesg -T

To adjust the profile, analyze the log messages relating to this application again as described in Section 24.7.3.9, “aa-logprof—Scanning the System Log”. Determine the access rights or restrictions when prompted.

Tip: For More Information

For more information about profile building and modification, refer to Chapter 21, Profile Components and Syntax, Chapter 23, Building and Managing Profiles with YaST, and Chapter 24, Building Profiles from the Command Line.

19.5 Updating Your Profiles

Software and system configurations change over time. As a result, your profile setup for AppArmor might need some fine-tuning from time to time. AppArmor checks your system log for policy violations or other AppArmor events and lets you adjust your profile set accordingly. Any application behavior that is outside of any profile definition can be addressed by aa-logprof. For more information, see Section 24.7.3.9, “aa-logprof—Scanning the System Log”.
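
In practice, such a maintenance pass can be as simple as running the log scanner as root and answering its prompts:

aa-logprof   # scan the system log for AppArmor events and update profiles interactively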

20 Immunizing Programs

Effective hardening of a computer system requires minimizing the number of programs that mediate privilege, then securing the programs as much as possible. With AppArmor, you only need to profile the programs that are exposed to attack in your environment, which drastically reduces the amount of work required to harden your computer. AppArmor profiles enforce policies to make sure that programs do what they are supposed to do, but nothing else.

AppArmor provides immunization technologies that protect applications from the inherent vulnerabilities they possess. After installing AppArmor, setting up AppArmor profiles, and rebooting the computer, your system becomes immunized because it begins to enforce the AppArmor security policies. Protecting programs with AppArmor is called immunizing.

Administrators need only concern themselves with the applications that are vulnerable to attacks, and generate profiles for these. Hardening a system thus comes down to building and maintaining the AppArmor profile set and monitoring any policy violations or exceptions logged by AppArmor's reporting facility.

Users should not notice AppArmor. It runs behind the scenes and does not require any user interaction. Performance is not noticeably affected by AppArmor. If some activity of the application is not covered by an AppArmor profile or if some activity of the application is prevented by AppArmor, the administrator needs to adjust the profile of this application.

AppArmor sets up a collection of default application profiles to protect standard Linux services. To protect other applications, use the AppArmor tools to create profiles for the applications that you want protected. This chapter introduces the philosophy of immunizing programs. Proceed to Chapter 21, Profile Components and Syntax, Chapter 23, Building and Managing Profiles with YaST, or Chapter 24, Building Profiles from the Command Line if you are ready to build and manage AppArmor profiles.

AppArmor provides streamlined access control for network services by specifying which files each program is allowed to read, write, and execute, and which type of network it is allowed to access. This ensures that each program does what it is supposed to do, and nothing else. AppArmor quarantines programs to protect the rest of the system from being damaged by a compromised process.

AppArmor is a host intrusion prevention or mandatory access control scheme. Previously, access control schemes were centered around users because they were built for large timeshare systems. Alternatively, modern network servers largely do not permit users to log in, but instead provide a variety of network services for users (such as Web, mail, file, and print servers). AppArmor controls the access given to network services and other programs to prevent weaknesses from being exploited.

Tip: Background Information for AppArmor

To get a more in-depth overview of AppArmor and the overall concept behind it, refer to Section 18.2, “Background Information on AppArmor Profiling”.

20.1 Introducing the AppArmor Framework

This section provides a very basic understanding of what is happening behind the scenes (and under the hood of the YaST interface) when you run AppArmor.

An AppArmor profile is a plain text file containing path entries and access permissions. See Section 21.1, “Breaking an AppArmor Profile into Its Parts” for a detailed reference profile. The directives contained in this text file are then enforced by the AppArmor routines to quarantine the process or program.

The following tools interact in the building and enforcement of AppArmor profiles and policies:

aa-status

aa-status reports various aspects of the current state of the running AppArmor confinement.

aa-unconfined

aa-unconfined detects any application running on your system that listens for network connections and is not protected by an AppArmor profile. Refer to Section 24.7.3.12, “aa-unconfined—Identifying Unprotected Processes” for detailed information about this tool.

aa-autodep

aa-autodep creates a basic framework of a profile that needs to be fleshed out before it is put to use in production. The resulting profile is loaded and put into complain mode, reporting any behavior of the application that is not (yet) covered by AppArmor rules. Refer to Section 24.7.3.1, “aa-autodep—Creating Approximate Profiles” for detailed information about this tool.

aa-genprof

aa-genprof generates a basic profile and asks you to refine this profile by executing the application and generating log events that need to be taken care of by AppArmor policies. You are guided through a series of questions to deal with the log events that have been triggered during the application's execution. After the profile has been generated, it is loaded and put into enforce mode. Refer to Section 24.7.3.8, “aa-genprof—Generating Profiles” for detailed information about this tool.

aa-logprof

aa-logprof interactively scans and reviews the log entries generated by an application that is confined by an AppArmor profile in both complain and enforced modes. It assists you in generating new entries in the profile concerned. Refer to Section 24.7.3.9, “aa-logprof—Scanning the System Log” for detailed information about this tool.

aa-easyprof

aa-easyprof provides an easy-to-use interface for AppArmor profile generation. aa-easyprof supports the use of templates and policy groups to quickly profile an application. Note that while this tool can help with policy generation, its utility is dependent on the quality of the templates, policy groups and abstractions used. aa-easyprof may create a profile that is less restricted than creating the profile with aa-genprof and aa-logprof.

aa-complain

aa-complain toggles the mode of an AppArmor profile from enforce to complain. Violations to rules set in a profile are logged, but the profile is not enforced. Refer to Section 24.7.3.2, “aa-complain—Entering Complain or Learning Mode” for detailed information about this tool.

aa-enforce

aa-enforce toggles the mode of an AppArmor profile from complain to enforce. Violations to rules set in a profile are logged and not permitted—the profile is enforced. Refer to Section 24.7.3.6, “aa-enforce—Entering Enforce Mode” for detailed information about this tool.

aa-disable

aa-disable disables the enforcement mode for one or more AppArmor profiles. This command will unload the profile from the kernel and prevent it from being loaded on AppArmor start-up. The aa-enforce and aa-complain utilities may be used to change this behavior.

aa-exec

aa-exec launches a program confined by the specified AppArmor profile and/or namespace. If both a profile and a namespace are specified, the command will be confined by the profile in the new policy namespace. If only a namespace is specified, the profile name of the current confinement will be used. If neither a profile nor a namespace is specified, the command will be run using standard profile attachment—as if run without aa-exec.

aa-notify

aa-notify is a handy utility that displays AppArmor notifications in your desktop environment. You can also configure it to display a summary of notifications for the specified number of recent days. For more information, see Section 24.7.3.13, “aa-notify”.
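
The following illustrates typical invocations of some of these tools; the profile path is an example:

aa-status                                    # summary of loaded profiles and their modes
aa-unconfined                                # listening processes without a profile
aa-complain /etc/apparmor.d/usr.sbin.cupsd   # switch a profile to complain mode
aa-enforce /etc/apparmor.d/usr.sbin.cupsd    # switch it back to enforce mode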

20.2 Determining Programs to Immunize

Now that you have familiarized yourself with AppArmor, start selecting the applications for which to build profiles. Programs that need profiling are those that mediate privilege. The following programs have access to resources that the person using the program does not have, so they grant the privilege to the user when used:

cron Jobs

Programs that are run periodically by cron. Such programs read input from a variety of sources and can run with special privileges, sometimes with as much as root privilege. For example, cron can run /usr/sbin/logrotate daily to rotate, compress, or even mail system logs. For instructions for finding these types of programs, refer to Section 20.3, “Immunizing cron Jobs”.

Web Applications

Programs that can be invoked through a Web browser, including CGI Perl scripts, PHP pages, and more complex Web applications. For instructions for finding these types of programs, refer to Section 20.4.1, “Immunizing Web Applications”.

Network Agents

Programs (servers and clients) that have open network ports. User clients, such as mail clients and Web browsers, mediate privilege. These programs run with the privilege to write to the user's home directory and they process input from potentially hostile remote sources, such as hostile Web sites and e-mailed malicious code. For instructions for finding these types of programs, refer to Section 20.4.2, “Immunizing Network Agents”.

Conversely, unprivileged programs do not need to be profiled. For example, a shell script might invoke the cp program to copy a file. Because cp does not by default have its own profile or subprofile, it inherits the profile of the parent shell script. Thus cp can copy any files that the parent shell script's profile can read and write.

20.3 Immunizing cron Jobs

To find programs that are run by cron, inspect your local cron configuration. Unfortunately, cron configuration is rather complex, so there are numerous files to inspect. Periodic cron jobs are run from these files:

/etc/crontab 
/etc/cron.d/* 
/etc/cron.daily/* 
/etc/cron.hourly/*
/etc/cron.monthly/* 
/etc/cron.weekly/*

The crontab command lists/edits the current user's crontab. To manipulate root's cron jobs, first become root, and then edit the tasks with crontab -e or list them with crontab -l.
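
As a quick sketch, the standard locations can be enumerated from a shell to collect candidate programs:

ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
crontab -l        # list the current user's cron jobs
sudo crontab -l   # list root's cron jobs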

20.4 Immunizing Network Applications

An automated method for finding network server daemons that should be profiled is to use the aa-unconfined tool.

The aa-unconfined tool uses the command netstat -nlp to inspect open ports from inside your computer, detect the programs associated with those ports, and inspect the set of AppArmor profiles that you have loaded. aa-unconfined then reports these programs along with the AppArmor profile associated with each program, or reports none (if the program is not confined).

Note

If you create a new profile, you must restart the program that has been profiled so that it is effectively confined by AppArmor.

Below is a sample aa-unconfined output:

37021 /usr/sbin/sshd2 confined by '/usr/sbin/sshd (enforce)'3
4040 /usr/sbin/ntpd confined by '/usr/sbin/ntpd (enforce)' 
4373 /usr/lib/postfix/master confined by '/usr/lib/postfix/master (enforce)' 
4505 /usr/sbin/httpd2-prefork confined by '/usr/sbin/httpd2-prefork (enforce)'
646 /usr/lib/wicked/bin/wickedd-dhcp4 not confined
647 /usr/lib/wicked/bin/wickedd-dhcp6 not confined
5592 /usr/bin/ssh not confined
7146 /usr/sbin/cupsd confined by '/usr/sbin/cupsd (complain)'

1

The first portion is a number. This number is the process ID number (PID) of the listening program.

2

The second portion is a string that represents the absolute path of the listening program.

3

The final portion indicates the profile confining the program, if any.

Note

aa-unconfined requires root privileges and should not be run from a shell that is confined by an AppArmor profile.

aa-unconfined does not distinguish between one network interface and another, so it reports all unconfined processes, even those that might be listening to an internal LAN interface.

Finding user network client applications is dependent on your user preferences. The aa-unconfined tool detects and reports network ports opened by client applications, but only those client applications that are running at the time the aa-unconfined analysis is performed. This is a problem because network services tend to be running all the time, while network client applications tend only to be running when the user is interested in them.

Applying AppArmor profiles to user network client applications is also dependent on user preferences. Therefore, we leave the profiling of user network client applications as an exercise for the user.

To aggressively confine desktop applications, the aa-unconfined command supports a --paranoid option, which reports all processes running and the corresponding AppArmor profiles that might or might not be associated with each process. The user can then decide whether each of these programs needs an AppArmor profile.

If you have new or modified profiles, you can submit them to the AppArmor mailing list along with a use case for the application behavior that you exercised. The AppArmor team reviews and may submit the work into SUSE Linux Enterprise Desktop. We cannot guarantee that every profile will be included, but we make a sincere effort to include as much as possible.

20.4.1 Immunizing Web Applications

To find Web applications, investigate your Web server configuration. The Apache Web server is highly configurable and Web applications can be stored in many directories, depending on your local configuration. SUSE Linux Enterprise Desktop, by default, stores Web applications in /srv/www/cgi-bin/. To the maximum extent possible, each Web application should have an AppArmor profile.

Once you find these programs, you can use the aa-genprof and aa-logprof tools to create or update their AppArmor profiles.

Because CGI programs are executed by the Apache Web server, the profile for Apache itself, usr.sbin.httpd2-prefork for Apache2 on SUSE Linux Enterprise Desktop, must be modified to add execute permissions to each of these programs. For example, adding the line /srv/www/cgi-bin/my_hit_counter.pl rPx grants Apache permission to execute the Perl script my_hit_counter.pl and requires that there be a dedicated profile for my_hit_counter.pl. If my_hit_counter.pl does not have a dedicated profile associated with it, the rule should say /srv/www/cgi-bin/my_hit_counter.pl rix to cause my_hit_counter.pl to inherit the usr.sbin.httpd2-prefork profile.

Some users might find it inconvenient to specify execute permission for every CGI script that Apache might invoke. Instead, the administrator can grant controlled access to collections of CGI scripts. For example, adding the line /srv/www/cgi-bin/*.{pl,py,pyc} rix allows Apache to execute all files in /srv/www/cgi-bin/ ending in .pl (Perl scripts) and .py or .pyc (Python scripts). As above, the ix part of the rule causes Python scripts to inherit the Apache profile, which is appropriate if you do not want to write individual profiles for each CGI script.
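
Inside the Apache profile, these (example) rules would appear as follows; note that the two approaches are alternatives for the same scripts:

/usr/sbin/httpd2-prefork {
  # ...
  # a dedicated profile per script:
  /srv/www/cgi-bin/my_hit_counter.pl rPx,
  # or, alternatively, whole collections inheriting Apache's profile:
  # /srv/www/cgi-bin/*.{pl,py,pyc} rix,
}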

Note

If you want the subprocess confinement module (apache2-mod-apparmor) functionality when Web applications handle Apache modules (mod_perl and mod_php), use the ChangeHat features when you add a profile in YaST or at the command line. To take advantage of the subprocess confinement, refer to Section 25.2, “Managing ChangeHat-Aware Applications”.

Profiling Web applications that use mod_perl and mod_php requires slightly different handling. In this case, the program is a script interpreted directly by the module within the Apache process, so no exec happens. Instead, the AppArmor version of Apache calls change_hat() using a subprofile (a hat) corresponding to the name of the URI requested.

Note

The name presented for the script to execute might not be the URI, depending on how Apache has been configured for where to look for module scripts. If you have configured your Apache to place scripts in a different place, the different names appear in the log file when AppArmor complains about access violations. See Chapter 27, Managing Profiled Applications.

For mod_perl and mod_php scripts, this is the name of the Perl script or the PHP page requested. For example, adding this subprofile allows the localtime.php page to execute and to access the local system time and locale files:

/usr/sbin/httpd2-prefork {
  # ...
  ^/cgi-bin/localtime.php {
    /etc/localtime                  r,
    /srv/www/cgi-bin/localtime.php  r,
    /usr/lib/locale/**              r,
  }
}

If no subprofile has been defined, the AppArmor version of Apache applies the DEFAULT_URI hat. This subprofile is sufficient to display a Web page. The DEFAULT_URI hat that AppArmor provides by default is the following:

^DEFAULT_URI {
    /usr/sbin/suexec2                  mixr,
    /var/log/apache2/**                rwl,
    @{HOME}/public_html                r,
    @{HOME}/public_html/**             r,
    /srv/www/htdocs                    r,
    /srv/www/htdocs/**                 r,
    /srv/www/icons/*.{gif,jpg,png}     r,
    /srv/www/vhosts                    r,
    /srv/www/vhosts/**                 r,
    /usr/share/apache2/**              r,
    /var/lib/php/sess_*                rwl,
}

To use a single AppArmor profile for all Web pages and CGI scripts served by Apache, a good approach is to edit the DEFAULT_URI subprofile. For more information on confining Web applications with Apache, see Chapter 25, Profiling Your Web Applications Using ChangeHat.

20.4.2 Immunizing Network Agents

To find network server daemons and network clients (such as fetchmail or Firefox) that need to be profiled, you should inspect the open ports on your machine. Also consider the programs that are answering on those ports, and provide profiles for as many of those programs as possible. If you provide profiles for all programs with open network ports, an attacker cannot get to the file system on your machine without passing through an AppArmor profile policy.

Scan your server for open network ports manually from outside the machine using a scanner (such as nmap), or from inside the machine using the netstat --inet -n -p command as root. Then, inspect the machine to determine which programs are answering on the discovered open ports.

Tip

Refer to the man page of the netstat command for a detailed reference of all possible options.

21 Profile Components and Syntax

Building AppArmor profiles to confine an application is very straightforward and intuitive. AppArmor ships with several tools that assist in profile creation. It does not require you to do any programming or script handling. The only task that is required of the administrator is to determine a policy of strictest access and execute permissions for each application that needs to be hardened.

Updates or modifications to the application profiles are only required if the software configuration or the desired range of activities changes. AppArmor offers intuitive tools to handle profile updates and modifications.

You are ready to build AppArmor profiles after you select the programs to profile. To do so, it is important to understand the components and syntax of profiles. AppArmor profiles contain several building blocks that help build simple and reusable profile code:

Include Files

Include statements are used to pull in parts of other AppArmor profiles to simplify the structure of new profiles.

Abstractions

Abstractions are include statements grouped by common application tasks.

Program Chunks

Program chunks are include statements that contain chunks of profiles that are specific to program suites.

Capability Entries

Capability entries are profile entries for any of the POSIX.1e (http://en.wikipedia.org/wiki/POSIX#POSIX.1) Linux capabilities, allowing fine-grained control over what a confined process is allowed to do through system calls that require privileges.

Network Access Control Entries

Network Access Control Entries mediate network access based on the address type and family.

Local Variable Definitions

Local variables define shortcuts for paths.

File Access Control Entries

File Access Control Entries specify the set of files an application can access.

rlimit Entries

rlimit entries set and control an application's resource limits.

For help determining the programs to profile, refer to Section 20.2, “Determining Programs to Immunize”. To start building AppArmor profiles with YaST, proceed to Chapter 23, Building and Managing Profiles with YaST. To build profiles using the AppArmor command line interface, proceed to Chapter 24, Building Profiles from the Command Line.

21.1 Breaking an AppArmor Profile into Its Parts

The easiest way of explaining what a profile consists of and how to create one is to show the details of a sample profile, in this case for a hypothetical application called /usr/bin/foo:

#include <tunables/global>1

# a comment naming the application to confine
/usr/bin/foo2 {3
   #include <abstractions/base>4

   capability setgid5,
   network inet tcp6,

   link /etc/sysconfig/foo -> /etc/foo.conf,7
   /bin/mount            ux,
   /dev/{,u}8random     r,
   /etc/ld.so.cache      r,
   /etc/foo/*            r,
   /lib/ld-*.so*         mr,
   /lib/lib*.so*         mr,
   /proc/[0-9]**         r,
   /usr/lib/**           mr,
   /tmp/                 r,9
   /tmp/foo.pid          wr,
   /tmp/foo.*            lrw,
   /@{HOME}10/.foo_file   rw,
   /@{HOME}/.foo_lock    kw,
   owner11 /shared/foo/** rw,
   /usr/bin/foobar       Cx,12
   /bin/**               Px -> bin_generic,13

   # a comment about foo's local (children) profile for /usr/bin/foobar.

   profile /usr/bin/foobar14 {
      /bin/bash          rmix,
      /bin/cat           rmix,
      /bin/more          rmix,
      /var/log/foobar*   rwl,
      /etc/foobar        r,
   }

  # foo's hat, bar.
   ^bar15 {
    /lib/ld-*.so*         mr,
    /usr/bin/bar          px,
    /var/spool/*          rwl,
   }
}

1

This loads a file containing variable definitions.

2

The normalized path to the program that is confined.

3

The curly braces ({}) serve as a container for include statements, subprofiles, path entries, capability entries, and network entries.

4

This directive pulls in components of AppArmor profiles to simplify profiles.

5

Capability entry statements enable each of the 29 POSIX.1e draft capabilities.

6

A directive determining the kind of network access allowed to the application. For details, refer to Section 21.5, “Network Access Control”.

7

A link pair rule specifying the source and the target of a link. See Section 21.7.6, “Link Pair” for more information.

8

The curly braces ({}) here allow for each of the listed possibilities, one of which is the empty string.

9

A path entry specifying what areas of the file system the program can access. The first part of a path entry specifies the absolute path of a file (including regular expression globbing) and the second part indicates permissible access modes (for example r for read, w for write, and x for execute). A whitespace of any kind (spaces or tabs) can precede the path name, but must separate the path name and the mode specifier. Spaces between the access mode and the trailing comma are optional. Find a comprehensive overview of the available access modes in Section 21.7, “File Permission Access Modes”.

10

This variable expands to a value that can be changed without changing the entire profile.

11

An owner conditional rule, granting read and write permission on files owned by the user. Refer to Section 21.7.8, “Owner Conditional Rules” for more information.

12

This entry defines a transition to the local profile /usr/bin/foobar. Find a comprehensive overview of the available execute modes in Section 21.8, “Execute Modes”.

13

A named profile transition to the profile bin_generic located in the global scope. See Section 21.8.7, “Named Profile Transitions” for details.

14

The local profile /usr/bin/foobar is defined in this section.

15

This section references a hat subprofile of the application. For more details on AppArmor's ChangeHat feature, refer to Chapter 25, Profiling Your Web Applications Using ChangeHat.

When a profile is created for a program, the program can access only the files, modes, and POSIX capabilities specified in the profile. These restrictions are in addition to the native Linux access controls.

Example:  To gain the capability CAP_CHOWN, the program must have both access to CAP_CHOWN under conventional Linux access controls (typically, be a root-owned process) and have the capability chown in its profile. Similarly, to be able to write to the file /foo/bar, the program must have both the correct user ID and mode bits set in the file's attributes and have /foo/bar w in its profile.

Attempts to violate AppArmor rules are recorded in /var/log/audit/audit.log if the audit package is installed, or in /var/log/messages, or only in journalctl if no traditional syslog is installed. Often AppArmor rules prevent an attack from working because necessary files are not accessible and, in all cases, AppArmor confinement restricts the damage that the attacker can do to the set of files permitted by AppArmor.

21.2 Profile Types

AppArmor knows four different types of profiles: standard profiles, unattached profiles, local profiles and hats. Standard and unattached profiles are stand-alone profiles, each stored in a file under /etc/apparmor.d/. Local profiles and hats are child profiles embedded inside a parent profile, used to provide tighter or alternate confinement for a subtask of an application.

21.2.1 Standard Profiles

The default AppArmor profile is attached to a program by its name, so a profile name must match the path to the application it is to confine.

/usr/bin/foo {
...
}

This profile will be automatically used whenever an unconfined process executes /usr/bin/foo.

21.2.2 Unattached Profiles

Unattached profiles do not reside in the file system namespace and therefore are not automatically attached to an application. The name of an unattached profile is preceded by the keyword profile. You can freely choose a profile name, except for the following limitations: the name must not begin with a : or . character, and if it contains whitespace, it must be quoted. If the name begins with a /, the profile is considered to be a standard profile, so the following two profiles are identical:

profile /usr/bin/foo {
...
}
/usr/bin/foo {
...
}

Unattached profiles are never used automatically, nor can they be transitioned to through a Px rule. They need to be attached to a program by either using a named profile transition (see Section 21.8.7, “Named Profile Transitions”) or with the change_profile rule (see Section 21.2.5, “Change rules”).

Unattached profiles are useful for specialized profiles for system utilities that generally should not be confined by a system-wide profile (for example, /bin/bash). They can also be used to set up roles or to confine a user.
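
As a sketch, an unattached profile can define a role that a confined program switches to via a named profile transition (all names here are hypothetical):

profile user_role {
  # rules for the confined role
  /usr/bin/bash   rix,
  /home/*/**      rw,
}

/usr/bin/example-login {
  # ...
  /usr/bin/bash Px -> user_role,
}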

21.2.3 Local Profiles

Local profiles offer a convenient way to provide specialized confinement for utility programs launched by a confined application. They are specified like standard profiles, except that they are embedded in a parent profile and begin with the profile keyword:

/parent/profile {
   ...
   profile /local/profile {
      ...
   }
}

To transition to a local profile, either use a cx rule (see Section 21.8.2, “Discrete Local Profile Execute Mode (Cx)”) or a named profile transition (see Section 21.8.7, “Named Profile Transitions”).
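
For example, a parent profile can run a helper under its embedded local profile via Cx (the paths are placeholders):

/usr/bin/parent {
  # ...
  /usr/bin/helper Cx,

  profile /usr/bin/helper {
    /usr/bin/helper   r,
    /etc/helper.conf  r,
  }
}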

21.2.4 Hats

AppArmor "hats" are a local profiles with some additional restrictions and an implicit rule allowing for change_hat to be used to transition to them. Refer to Chapter 25, Profiling Your Web Applications Using ChangeHat for a detailed description.

21.2.5 Change rules

AppArmor provides change_hat and change_profile rules that control domain transitioning. change_hat rules are specified by defining hats in a profile, while change_profile rules refer to another profile and start with the keyword change_profile:

change_profile -> /usr/bin/foobar,

Both change_hat and change_profile provide for an application-directed profile transition without having to launch a separate application. change_profile provides a generic one-way transition between any of the loaded profiles. change_hat provides a returnable parent/child transition, where an application can switch from the parent profile to the hat profile and, by providing the correct secret key, return to the parent profile at a later time.

change_profile is best used in situations where an application goes through a trusted setup phase and then can lower its privilege level. Any resources mapped or opened during the start-up phase may still be accessible after the profile change, but the new profile will restrict the opening of new resources, and will even limit some resources opened before the switch. Specifically, memory resources will still be available while capability and file resources (as long as they are not memory mapped) can be limited.

change_hat is best used in situations where an application runs a virtual machine or an interpreter that does not provide direct access to the application's resources (for example Apache's mod_php). Since change_hat stores the return secret key in the application's memory, the phase of reduced privilege should not have direct access to that memory. It is also important that file access is properly separated, since the hat can restrict accesses to a file handle but does not close it. If an application does buffering and provides access to the open files with buffering, the accesses to these files might not be seen by the kernel and hence not restricted by the new profile.

Warning: Safety of Domain Transitions

The change_hat and change_profile domain transitions are less secure than a domain transition done through an exec because they do not affect a process's memory mappings, nor do they close resources that have already been opened.

21.3 Include Statements

Include statements are directives that pull in components of other AppArmor profiles to simplify profiles. Include files retrieve access permissions for programs. By using an include, you can give the program access to directory paths or files that are also required by other programs. Using includes can reduce the size of a profile.

Include statements normally begin with a hash (#) sign. This is confusing because the same hash sign is used for comments inside profile files. Because of this, #include is treated as an include only if there is no preceding # (##include is a comment) and there is no whitespace between # and include (# include is a comment).

You can also use include without the leading #.

include "/etc/apparmor.d/abstractions/foo"

is the same as using

#include "/etc/apparmor.d/abstractions/foo"
Note: No Trailing ','

Note that because includes follow the C pre-processor syntax, they do not have a trailing ',' like most AppArmor rules.

By slight changes in syntax, you can modify the behavior of include. If you use "" around the include path, you instruct the parser to do an absolute or relative path lookup.

include "/etc/apparmor.d/abstractions/foo"   # absolute path
include "abstractions/foo"   # relative path to the directory of current file

Note that with relative path includes, the included file is considered the new current file for its own includes. For example, suppose you are in the /etc/apparmor.d/bar file, then

include "abstractions/foo"

includes the file /etc/apparmor.d/abstractions/foo. If then there is

include "example"

inside the /etc/apparmor.d/abstractions/foo file, it includes /etc/apparmor.d/abstractions/example.

The use of <> specifies that the include path (set with -I, defaulting to the /etc/apparmor.d directory) is tried in an ordered way. So assuming the include path is

-I /etc/apparmor.d/ -I /usr/share/apparmor/

then the include statement

include <abstractions/foo>

will try /etc/apparmor.d/abstractions/foo, and if that file does not exist, the next try is /usr/share/apparmor/abstractions/foo.

Tip

The default include path can be overridden manually by passing -I to the apparmor_parser, or by setting the include paths in /etc/apparmor/parser.conf:

Include /usr/share/apparmor/
Include /etc/apparmor.d/

Multiple entries are allowed, and they are taken in the same order as when using -I or --Include on the apparmor_parser command line.

If an include ends with '/', this is considered a directory include, and all files within the directory are included.

To assist you in profiling your applications, AppArmor provides three classes of includes: abstractions, program chunks and tunables.

21.3.1 Abstractions

Abstractions are includes that are grouped by common application tasks. These tasks include access to authentication mechanisms, access to name service routines, common graphics requirements, and system accounting. Files listed in these abstractions are specific to the named task. Programs that require one of these files usually also require other files listed in the abstraction file (depending on the local configuration and the specific requirements of the program). Find abstractions in /etc/apparmor.d/abstractions.

21.3.2 Program Chunks

The program-chunks directory (/etc/apparmor.d/program-chunks) contains some chunks of profiles that are specific to program suites and not generally useful outside of the suite, thus are never suggested for use in profiles by the profile wizards (aa-logprof and aa-genprof). Currently, program chunks are only available for the postfix program suite.

21.3.3 Tunables

The tunables directory (/etc/apparmor.d/tunables) contains global variable definitions. When used in a profile, these variables expand to a value that can be changed without changing the entire profile. Add all the tunables definitions that should be available to every profile to /etc/apparmor.d/tunables/global.

21.4 Capability Entries (POSIX.1e)

Capability rules are simply the word capability followed by the name of the POSIX.1e capability as defined in the capabilities(7) man page. You can list multiple capabilities in a single rule, or grant all implemented capabilities with the bare keyword capability.

capability dac_override sys_admin,   # multiple capabilities
capability,                          # grant all capabilities

21.5 Network Access Control

AppArmor allows mediation of network access based on the address type and family. The following illustrates the network access rule syntax:

network [[<domain>1][<type>2][<protocol>3]]

1

Supported domains: inet, ax25, ipx, appletalk, netrom, bridge, x25, inet6, rose, netbeui, security, key, packet, ash, econet, atmsvc, sna, irda, pppox, wanpipe, bluetooth, unix, atmpvc, netlink, llc, can, tipc, iucv, rxrpc, isdn, phonet, ieee802154, caif, alg, nfc, vsock

2

Supported types: stream, dgram, seqpacket, rdm, raw, packet

3

Supported protocols: tcp, udp, icmp

The AppArmor tools support only family and type specification. The AppArmor module emits only network DOMAIN TYPE in ACCESS DENIED messages, and only these are output by the profile generation tools, both YaST and the command line tools.

The following examples illustrate possible network-related rules to be used in AppArmor profiles. Note that the syntax of the last two is not currently supported by the AppArmor tools.

network1,
network inet2,
network inet63,
network inet stream4,
network inet tcp5,
network tcp6,

1

Allow all networking. No restrictions applied with regard to domain, type, or protocol.

2

Allow general use of IPv4 networking.

3

Allow general use of IPv6 networking.

4

Allow the use of IPv4 TCP networking.

5

Allow the use of IPv4 TCP networking, paraphrasing the rule above.

6

Allow the use of both IPv4 and IPv6 TCP networking.

21.6 Profile Names, Flags, Paths, and Globbing

A profile is usually attached to a program by specifying a full path to the program's executable. For example, in the case of a standard profile (see Section 21.2.1, “Standard Profiles”), the profile is defined by

/usr/bin/foo { ... }

The following sections describe several useful techniques that can be applied when naming a profile or putting a profile in the context of other existing ones, or specifying file paths.

AppArmor explicitly distinguishes directory path names from file path names. Use a trailing / for any directory path that needs to be explicitly distinguished:

/some/random/example/* r

Allow read access to files in the /some/random/example directory.

/some/random/example/ r

Allow read access to the directory only.

/some/**/ r

Give read access to any directories below /some (but not /some/ itself).

/some/random/example/** r

Give read access to files and directories under /some/random/example (but not /some/random/example/ itself).

/some/random/example/**[^/] r

Give read access to files under /some/random/example. Explicitly exclude directories ([^/]).

Globbing (or regular expression matching) means modifying the directory path using wild cards to include a group of files or subdirectories. File resources can be specified with a globbing syntax similar to that used by popular shells, such as csh, Bash, and zsh.

*

Substitutes for any number of any characters, except /.

Example: An arbitrary number of file path elements.

**

Substitutes for any number of characters, including /.

Example: An arbitrary number of path elements, including entire directories.

?

Substitutes for any single character, except /.

[abc]

Substitutes for the single character a, b, or c.

Example: a rule that matches /home[01]/*/.plan allows a program to access .plan files for users in both /home0 and /home1.

[a-c]

Substitutes for the single character a, b, or c.

{ab,cd}

Expands to one rule to match ab and one rule to match cd.

Example: a rule that matches /{usr,www}/pages/** grants access to Web pages in both /usr/pages and /www/pages.

[^a]

Substitutes for any character except a.

21.6.1 Profile Flags

Profile flags control the behavior of the related profile. You can add profile flags to the profile definition by editing it manually; see the following syntax:

/path/to/profiled/binary flags=(list_of_flags) {
  [...]
}

You can use multiple flags separated by a comma ',' or space ' '. There are three basic types of profile flags: mode, relative, and attach flags.

The mode flag is complain (illegal accesses are allowed and logged). If it is omitted, the profile is in enforce mode (the policy is enforced).

Tip

A more flexible way of setting the whole profile into complain mode is to create a symbolic link from the profile file inside the /etc/apparmor.d/force-complain/ directory.

ln -s /etc/apparmor.d/bin.ping /etc/apparmor.d/force-complain/bin.ping

Relative flags are chroot_relative (states that the profile is relative to the chroot instead of namespace) or namespace_relative (the default, with the path being relative to outside the chroot). They are mutually exclusive.

Attach flags consist of two pairs of mutually exclusive flags: attach_disconnected or no_attach_disconnected (determine if path names resolved to be outside of the namespace are attached to the root, which means they have the '/' character at the beginning), and chroot_attach or chroot_no_attach (control path name generation when in a chroot environment while a file is accessed that is external to the chroot but within the namespace).
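
For example, the following puts a hypothetical profile into complain mode and attaches disconnected path names to the root:

/usr/bin/foo flags=(complain, attach_disconnected) {
  # ...
}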

21.6.2 Using Variables in Profiles

AppArmor allows you to use variables holding paths in profiles. Use global variables to make your profiles portable and local variables to create shortcuts for paths.

A typical example of when global variables come in handy are network scenarios in which user home directories are mounted in different locations. Instead of rewriting paths to home directories in all affected profiles, you only need to change the value of a variable. Global variables are defined under /etc/apparmor.d/tunables and need to be made available via an include statement. Find the variable definitions for this use case (@{HOME} and @{HOMEDIRS}) in the /etc/apparmor.d/tunables/home file.

Local variables are defined at the head of a profile. This is useful for providing the base of a chrooted path, for example:

@{CHROOT_BASE}=/tmp/foo
/sbin/rsyslogd {
...
# chrooted applications
@{CHROOT_BASE}/var/lib/*/dev/log w,
@{CHROOT_BASE}/var/log/** w,
...
}

In the following example, while @{HOMEDIRS} lists where all the user home directories are stored, @{HOME} is a space-separated list of home directories. Later on, @{HOMEDIRS} is extended by two additional locations where user home directories are stored.

@{HOMEDIRS}=/home/
@{HOME}=@{HOMEDIRS}/*/ /root/
[...]
@{HOMEDIRS}+=/srv/nfs/home/ /mnt/home/
Note

With the current AppArmor tools, variables can only be used when manually editing and maintaining a profile.

21.6.3 Pattern Matching

Profile names can contain globbing expressions allowing the profile to match against multiple binaries.

The following example is valid for systems where the foo binary resides either in /usr/bin or /bin.

/{usr/,}bin/foo { ... }

In the following example, when matching against the executable /bin/foo, the /bin/foo profile is an exact match so it is chosen. For the executable /bin/fat, the profile /bin/foo does not match, and because the /bin/f* profile is more specific (less general) than /bin/**, the /bin/f* profile is chosen.

/bin/foo { ... }

/bin/f*  { ... }

/bin/**  { ... }

For more information on profile name globbing examples, see the man page of AppArmor, man 5 apparmor.d, section Globbing.

21.6.4 Namespaces

Namespaces are used to provide different profile sets, for example one for the system and another for a chroot environment or container. Namespaces are hierarchical—a namespace can see its children but a child cannot see its parent. Namespace names start with a colon : followed by an alphanumeric string, a trailing colon : and an optional double slash //, such as

:childNameSpace://

Profiles loaded to a child namespace will be prefixed with their namespace name (viewed from a parent's perspective):

:childNameSpace://apache

Namespaces can be entered via the change_profile API, or named profile transitions:

/path/to/executable px -> :childNameSpace://apache

21.6.5 Profile Naming and Attachment Specification

Profiles can have a name, and an attachment specification. This allows for profiles with a logical name that can be more meaningful to users/administrators than a profile name that contains pattern matching (see Section 21.6.3, “Pattern Matching”). For example, the default profile

/** { ... }

can be named

profile default /** { ... }

Also, a profile with pattern matching can be named. For example:

/usr/lib/firefox-3.*/firefox-*bin { ... }

can be named

profile firefox /usr/lib/firefox-3.*/firefox-*bin { ... }

21.6.6 Alias Rules

Alias rules provide an alternative way to manipulate profile path mappings to site-specific layouts. They are an alternative form of path rewriting to using variables, and are applied after variable resolution. An alias rule says to treat rules that have the same source prefix as if the rules were at the target prefix.

alias /home/ -> /usr/home/

All rules that have a prefix matching /home/ will also provide access to /usr/home/. For example, the rule

/home/username/** r,

also allows access to

/usr/home/username/** r,

Aliases provide a quick way of remapping rules without the need to rewrite them. The source path remains accessible: in our example, the alias rule keeps the paths under /home/ accessible as well.

With the alias rule, you can point to multiple targets at the same time.

alias /home/ -> /usr/home/
alias /home/ -> /mnt/home/
Note

With the current AppArmor tools, alias rules can only be used when manually editing and maintaining a profile.

Tip

Insert global alias definitions in the file /etc/apparmor.d/tunables/alias.

21.7 File Permission Access Modes

File permission access modes consist of combinations of the following modes:

r

Read mode

w

Write mode (mutually exclusive to a)

a

Append mode (mutually exclusive to w)

k

File locking mode

l

Link mode

link FILE -> TARGET

Link pair rule (cannot be combined with other access modes)
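
The following sketch is purely illustrative (the myapp paths are placeholders, not part of any shipped profile) and shows how these modes typically appear in a profile:

/etc/myapp/config     r,    # read only
/var/log/myapp.log    a,    # append only: cannot truncate or remove the file
owner /run/myapp/lock rwk,  # read, write, and file locking
/srv/myapp/data/**    rl,   # read and create hard links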

21.7.1 Read Mode (r)

Allows the program to have read access to the resource. Read access is required for shell scripts and other interpreted content and determines if an executing process can core dump.

21.7.2 Write Mode (w)

Allows the program to have write access to the resource. Files must have this permission if they are to be unlinked (removed).

21.7.3 Append Mode (a)

Allows a program to write to the end of a file. In contrast to the w mode, the append mode does not include the ability to overwrite data, to rename, or to remove a file. The append permission is typically used with applications that need to be able to write to log files, but which should not be able to manipulate any existing data in the log files. As the append permission is a subset of the permissions associated with the write mode, the w and a permission flags are mutually exclusive and cannot be used together.

21.7.4 File Locking Mode (k)

The application can take file locks. Former versions of AppArmor allowed files to be locked if an application had access to them. By using a separate file locking mode, AppArmor makes sure locking is restricted only to those files which need file locking and tightens security as locking can be used in several denial of service attack scenarios.

21.7.7 Optional allow and file Rules

The allow prefix is optional: it is implied if it is not specified and the deny keyword (see Section 21.7.9, “Deny Rules”) is not used.

allow file /example r,
allow /example r,
allow network,

You can also use the optional file keyword. If you omit it and there are no other rule types that start with a keyword, such as network or mount, it is automatically implied.

file /example/rule r,

is equivalent to

/example/rule r,

The following rule grants access to all files:

file,

which is equal to

/** rwmlk,

File rules can use leading or trailing permissions. Prefer specifying the permissions at the start of the rule rather than as a trailing permission, because this makes file rules behave like any other rule type.

/path rw,            # old style
rw /path,            # leading permission
file rw /path,       # with explicit 'file' keyword
allow file rw /path, # optional 'allow' keyword added

21.7.8 Owner Conditional Rules

The file rules can be extended so that they can be conditional upon the user being the owner of the file (the fsuid needs to match the file's uid). For this purpose the owner keyword is put in front of the rule. Owner conditional rules accumulate like regular file rules do.

owner /home/*/** rw,

When using file ownership conditions with link rules, the ownership test is done against the target file, so the user must own the file to be able to link to it.

Note: Precedence of Regular File Rules

Owner conditional rules are considered a subset of regular file rules. If a regular file rule overlaps with an owner conditional file rule, the rules are merged. Consider the following example.

/foo r,
owner /foo rw,  # or w,

The rules are merged—it results in r for everybody, and w for the owner only.

Tip

To address everybody but the owner of the file, use the keyword other.

owner /foo rw,
other /foo r,

21.7.9 Deny Rules

Deny rules can be used to annotate or quiet known rejects. The profile generating tools will not ask about a known reject treated with a deny rule. Such a reject will also not show up in the audit logs when denied, keeping the log files lean. If this is not desired, put the keyword audit in front of the deny entry.

It is also possible to use deny rules in combination with allow rules. This allows you to specify a broad allow rule, and then subtract a few known files that should not be allowed. Deny rules can also be combined with owner rules, to deny access to files owned by the user. The following example allows read/write access to everything in a user's directory except write access to the .ssh/ files:

deny /home/*/.ssh/** w,
owner /home/*/** rw,

The extensive use of deny rules is generally not encouraged, because it makes it much harder to understand what a profile does. However, judicious use of deny rules can simplify profiles. Therefore, the tools only generate deny rules for specific files and do not use globbing in deny rules. Manually edit your profiles to add deny rules using globbing. Updating such profiles using the tools is safe, because the deny entries will not be touched.
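
For example, a manually added deny rule using globbing, prefixed with audit so that denials still show up in the log (the path is illustrative):

audit deny owner /home/*/.gnupg/** w,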

21.8 Execute Modes

Execute modes, also named profile transitions, consist of the following modes:

Px

Discrete profile execute mode

Cx

Discrete local profile execute mode

Ux

Unconfined execute mode

ix

Inherit execute mode

m

Allow PROT_EXEC with mmap(2) calls

21.8.1 Discrete Profile Execute Mode (Px)

This mode requires that a discrete security profile is defined for a resource executed at an AppArmor domain transition. If there is no profile defined, the access is denied.

Incompatible with Ux, ux, px, and ix.

21.8.2 Discrete Local Profile Execute Mode (Cx)

As Px, but instead of searching the global profile set, Cx only searches the local profiles of the current profile. This profile transition provides a way for an application to have alternate profiles for helper applications.

Note: Limitations of the Discrete Local Profile Execute Mode (Cx)

Currently, Cx transitions are limited to top level profiles and cannot be used in hats and child profiles. This restriction will be removed in the future.

Incompatible with Ux, ux, Px, px, cx, and ix.

21.8.3 Unconfined Execute Mode (Ux)

Allows the program to execute the resource without any AppArmor profile applied to the executed resource. This mode is useful when a confined program needs to be able to perform a privileged operation, such as rebooting the machine. By placing the privileged section in another executable and granting unconfined execution rights, it is possible to bypass the mandatory constraints imposed on all confined processes. Allowing a root process to go unconfined means it can change AppArmor policy itself. For more information about what is constrained, see the apparmor(7) man page.

This mode is incompatible with ux, px, Px, and ix.

21.8.4 Unsafe Exec Modes

Use the lowercase versions of exec modes—px, cx, ux—only in very special cases. They do not scrub the environment of variables such as LD_PRELOAD. As a result, the calling domain may have an undue amount of influence over the called resource. Use these modes only if the child absolutely must be run unconfined and LD_PRELOAD must be used. Any profile using such modes provides negligible security. Use at your own risk.

21.8.5 Inherit Execute Mode (ix)

ix prevents the normal AppArmor domain transition on execve(2) when the profiled program executes the named program. Instead, the executed resource inherits the current profile.

This mode is useful when a confined program needs to call another confined program without gaining the permissions of the target's profile or losing the permissions of the current profile. There is no version to scrub the environment because ix executions do not change privileges.

Incompatible with cx, ux, and px. Implies m.

21.8.6 Allow Executable Mapping (m)

This mode allows a file to be mapped into memory using mmap(2)'s PROT_EXEC flag. This flag marks the pages executable. It is used on some architectures to provide non-executable data pages, which can complicate exploit attempts. AppArmor uses this mode to limit which files a well-behaved program (or all programs on architectures that enforce non-executable memory access controls) may use as libraries, to limit the effect of invalid -L flags given to ld(1) and of LD_PRELOAD and LD_LIBRARY_PATH given to ld.so(8).
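
As an orientation, the following hypothetical rules (all paths are placeholders) contrast the execute modes described above:

/usr/bin/helper      Px,   # transition to the helper's own profile, scrubbed environment
/usr/bin/localtool   Cx,   # transition to a local (child) profile
/usr/bin/pager       ix,   # inherit the current profile (implies m)
/usr/bin/legacytool  Ux,   # run unconfined, scrubbed environment (use with care)
/usr/lib/mylib.so    mr,   # allow mapping into memory as executable, plus read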

21.8.7 Named Profile Transitions

By default, the px and cx (and their clean exec variants, too) transition to a profile whose name matches the executable name. With named profile transitions, you can specify a profile to be transitioned to. This is useful if multiple binaries need to share a single profile, or if they need to use a different profile than their name would specify. Named profile transitions can be used with cx, Cx, px and Px. Currently there is a limit of twelve named profile transitions per profile.

Named profile transitions use -> to indicate the name of the profile that needs to be transitioned to:

/usr/bin/foo
{
  /bin/** px -> shared_profile,
  ...
  /usr/*bash cx -> local_profile,
  ...
  profile local_profile
  {
    ...
  }
}
Note: Difference Between Normal and Named Transitions

When used with globbing, normal transitions provide a one-to-many relationship: /bin/** px will transition to /bin/ping, /bin/cat, etc., depending on the program being run.

Named transitions provide a many-to-one relationship: all programs that match the rule, regardless of their name, will transition to the specified profile.

Named profile transitions show up in the log as having the mode Nx. The name of the profile to be changed to is listed in the name2 field.

21.8.8 Fallback Modes for Profile Transitions

The px and cx transitions specify a hard dependency—if the specified profile does not exist, the exec will fail. With the inheritance fallback, the execution will succeed but inherit the current profile. To specify inheritance fallback, ix is combined with cx, Cx, px and Px into the modes cix, Cix, pix and Pix.

/path Cix -> profile_name,

or

Cix /path -> profile_name,

where -> profile_name is optional.

The same applies if you add the unconfined ux mode, where the resulting modes are cux, CUx, pux and PUx. These modes allow falling back to unconfined when the specified profile is not found.

/path PUx -> profile_name,

or

PUx /path -> profile_name,

where -> profile_name is optional.

The fallback modes can be used with named profile transitions, too.

21.8.9 Variable Settings in Execution Modes

When choosing one of the Px, Cx or Ux execution modes, take into account that the following environment variables are removed from the environment before the child process inherits it. As a consequence, applications or processes relying on any of these variables no longer work if the profile applied to them carries Px, Cx or Ux flags:

  • GCONV_PATH

  • GETCONF_DIR

  • HOSTALIASES

  • LD_AUDIT

  • LD_DEBUG

  • LD_DEBUG_OUTPUT

  • LD_DYNAMIC_WEAK

  • LD_LIBRARY_PATH

  • LD_ORIGIN_PATH

  • LD_PRELOAD

  • LD_PROFILE

  • LD_SHOW_AUXV

  • LD_USE_LOAD_BIAS

  • LOCALDOMAIN

  • LOCPATH

  • MALLOC_TRACE

  • NLSPATH

  • RESOLV_HOST_CONF

  • RES_OPTIONS

  • TMPDIR

  • TZDIR

21.8.10 safe and unsafe Keywords

You can use the safe and unsafe keywords for rules instead of using the case modifier of execution modes. For example

/example_rule Px,

is the same as any of the following

safe /example_rule px,
safe /example_rule Px,
safe px /example_rule,
safe Px /example_rule,

and the rule

/example_rule px,

is the same as any of

unsafe /example_rule px,
unsafe /example_rule Px,
unsafe px /example_rule,
unsafe Px /example_rule,

The safe/unsafe keywords are mutually exclusive and can be used in a file rule after the owner keyword, so the order of rule keywords is

[audit] [deny] [owner] [safe|unsafe] file_rule
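
For example, a hypothetical rule that uses all of these optional keywords in the prescribed order:

audit owner safe /home/*/bin/updater Px,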

21.9 Resource Limit Control

AppArmor can set and control an application's resource limits (rlimits, also known as ulimits). By default, AppArmor does not control applications' rlimits; it only controls those limits specified in the confining profile. For more information about resource limits, refer to the setrlimit(2), ulimit(1), or ulimit(3) man pages.

AppArmor leverages the system's rlimits and as such does not provide additional auditing beyond what normally occurs. It also cannot raise rlimits set by the system; AppArmor rlimit rules can only reduce an application's current resource limits.

The values will be inherited by the children of a process and will remain even if a new profile is transitioned to or the application becomes unconfined. So when an application transitions to a new profile, that profile can further reduce the application's rlimits.

AppArmor's rlimit rules will also provide mediation of setting an application's hard limits, should it try to raise them. The application cannot raise its hard limits any further than specified in the profile. The mediation of raising hard limits is not inherited as the set value is, so that when the application transitions to a new profile it is free to raise its limits as specified in the profile.

AppArmor's rlimit control does not affect an application's soft limits beyond ensuring that they are less than or equal to the application's hard limits.

AppArmor's hard limit rules have the general form of:

set rlimit RESOURCE <= VALUE,

where RESOURCE and VALUE are to be replaced with the following values:

cpu

CPU time limit in seconds.

fsize, data, stack, core, rss, as, memlock, msgqueue

a number in bytes, or a number with a suffix where the suffix can be K/KB (kilobytes), M/MB (megabytes), G/GB (gigabytes), for example

set rlimit data <= 100M,

nofile, locks, sigpending, nproc*, rtprio

a number greater than or equal to 0

nice

a value between -20 and 19

*The nproc rlimit is handled differently from all the other rlimits. Instead of indicating the standard process rlimit, it controls the maximum number of processes that can be running under the profile at any time. When the limit is exceeded, the creation of new processes under the profile will fail until the number of currently running processes is reduced.
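
Putting this together, a hypothetical profile could limit its application as follows (the program path and all values are illustrative):

/usr/bin/myapp {
  set rlimit cpu <= 60,     # at most 60 seconds of CPU time
  set rlimit data <= 100M,  # data segment limited to 100 megabytes
  set rlimit nproc <= 20,   # at most 20 processes under this profile
  set rlimit nice <= 5,
  ...
}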

Note

Currently the tools cannot be used to add rlimit rules to profiles. The only way to add rlimit controls to a profile is to manually edit the profile with a text editor. The tools will still work with profiles containing rlimit rules and will not remove them, so it is safe to use the tools to update profiles containing them.

21.10 Auditing Rules

AppArmor provides the ability to audit given rules so that when they are matched an audit message will appear in the audit log. To enable audit messages for a given rule, the audit keyword is put in front of the rule:

audit /etc/foo/*        rw,

If it is desirable to audit only a given permission the rule can be split into two rules. The following example will result in audit messages when files are opened for writing, but not when they are opened for reading:

audit /etc/foo/*  w,
/etc/foo/*        r,
Note

Audit messages are not generated for every read or write of a file but only when a file is opened for reading or writing.

Audit control can be combined with owner/other conditional file rules to provide auditing when users access files they own/do not own:

audit owner /home/*/.ssh/**       rw,
audit other /home/*/.ssh/**       r,

22 AppArmor Profile Repositories

AppArmor ships with a set of profiles enabled by default. These are created by the AppArmor developers, and are stored in /etc/apparmor.d. In addition to these profiles, SUSE Linux Enterprise Desktop ships profiles for individual applications together with the relevant application. These profiles are not enabled by default, and reside in a different directory than the standard AppArmor profiles: /etc/apparmor/profiles/extras.

The AppArmor tools (YaST, aa-genprof and aa-logprof) support the use of a local repository. Whenever you start to create a new profile from scratch and an inactive profile for the application already exists in /etc/apparmor/profiles/extras, you are asked whether you want to base your new profile on it. If you decide to use this profile, it gets copied over to the directory of profiles enabled by default (/etc/apparmor.d) and loaded whenever AppArmor is started. Any further adjustments will be done to the active profile under /etc/apparmor.d.

23 Building and Managing Profiles with YaST

YaST provides a basic way to build and manage AppArmor® profiles. It provides two interfaces: a graphical one and a text-based one. The text-based interface consumes fewer resources and less bandwidth, making it a better choice for remote administration, or for times when a local graphical environment is inconvenient. Although the interfaces have differing appearances, they offer the same functionality in similar ways. Another alternative is to use AppArmor commands, which can control AppArmor from a terminal window or through remote connections. The command line tools are described in Chapter 24, Building Profiles from the Command Line.

Start YaST from the main menu and enter your root password when prompted for it. Alternatively, start YaST by opening a terminal window, logging in as root, and entering yast2 for the graphical mode or yast for the text-based mode.

In the Security and Users section, there is an AppArmor Configuration icon. Click it to launch the AppArmor YaST module.

23.1 Manually Adding a Profile

AppArmor enables you to create an AppArmor profile by manually adding entries into the profile. Select the application for which to create a profile, then add entries.

  1. Start YaST, select AppArmor Configuration, and click Manually Add Profile in the main window.

  2. Browse your system to find the application for which to create a profile.

  3. When you find the application, select it and click Open. A basic, empty profile appears in the AppArmor Profile Dialog window.

  4. In AppArmor Profile Dialog, add, edit, or delete AppArmor profile entries by clicking the corresponding buttons and referring to Section 23.2.1, “Adding an Entry”, Section 23.2.2, “Editing an Entry”, or Section 23.2.3, “Deleting an Entry”.

  5. When finished, click Done.

23.2 Editing Profiles

Tip

YaST offers basic manipulation for AppArmor profiles, such as creating or editing. However, the most straightforward way to edit an AppArmor profile is to use a text editor such as vi:

root # vi /etc/apparmor.d/usr.sbin.httpd2-prefork
Tip

The vi editor also includes syntax (error) highlighting, which visually warns you when the syntax of the edited AppArmor profile is wrong.

AppArmor enables you to edit AppArmor profiles manually by adding, editing, or deleting entries. To edit a profile, proceed as follows:

  1. Start YaST, select AppArmor Configuration, and click Manage Existing Profiles in the main window.

  2. From the list of profiled applications, select the profile to edit.

  3. Click Edit. The AppArmor Profile Dialog window displays the profile.

  4. In the AppArmor Profile Dialog window, add, edit, or delete AppArmor profile entries by clicking the corresponding buttons and referring to Section 23.2.1, “Adding an Entry”, Section 23.2.2, “Editing an Entry”, or Section 23.2.3, “Deleting an Entry”.

  5. When you are finished, click Done.

  6. In the pop-up that appears, click Yes to confirm your changes to the profile and reload the AppArmor profile set.

Tip: Syntax Checking in AppArmor

AppArmor contains a syntax check that notifies you of any syntax errors in profiles you are trying to process with the YaST AppArmor tools. If an error occurs, edit the profile manually as root and reload the profile set with systemctl reload apparmor.

23.2.1 Adding an Entry

The Add Entry button in the AppArmor Profile Window lists types of entries you can add to the AppArmor profile.

From the list, select one of the following:

File

In the pop-up window, specify the absolute path of a file, including the type of access permitted. When finished, click OK.

You can use globbing if necessary. For globbing information, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”. For file access permission information, refer to Section 21.7, “File Permission Access Modes”.

Directory

In the pop-up window, specify the absolute path of a directory, including the type of access permitted. You can use globbing if necessary. When finished, click OK.

For globbing information, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”. For file access permission information, refer to Section 21.7, “File Permission Access Modes”.

Network Rule

In the pop-up window, select the appropriate network family and the socket type. For more information, refer to Section 21.5, “Network Access Control”.

Capability

In the pop-up window, select the appropriate capabilities. These are statements that enable each of the 32 POSIX.1e capabilities. Refer to Section 21.4, “Capability Entries (POSIX.1e)” for more information about capabilities. When finished making your selections, click OK.

Include File

In the pop-up window, browse to the files to use as includes. Includes are directives that pull in components of other AppArmor profiles to simplify profiles. For more information, refer to Section 21.3, “Include Statements”.

Hat

In the pop-up window, specify the name of the subprofile (hat) to add to your current profile and click Create Hat. For more information, refer to Chapter 25, Profiling Your Web Applications Using ChangeHat.

23.2.2 Editing an Entry

When you select Edit Entry, a pop-up window opens showing the selected entry. Edit the entry as needed; you can use globbing if necessary. When finished, click OK.

For globbing information, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”. For access permission information, refer to Section 21.7, “File Permission Access Modes”.

23.2.3 Deleting an Entry

To delete an entry in a given profile, select Delete Entry. AppArmor removes the selected profile entry.

23.3 Deleting a Profile

AppArmor enables you to delete an AppArmor profile manually. Select the application whose profile you want to delete, then proceed as follows:

  1. Start YaST, select AppArmor Configuration, and click Manage Existing Profiles in the main window.

  2. Select the profile to delete.

  3. Click Delete.

  4. In the pop-up that opens, click Yes to delete the profile and reload the AppArmor profile set.

23.4 Managing AppArmor

You can change the status of AppArmor by enabling or disabling it. Enabling AppArmor protects your system from potential program exploitation. Disabling AppArmor, even if your profiles have been set up, removes protection from your system. To change the status of AppArmor, start YaST, select AppArmor Configuration, and click Settings in the main window.

To change the status of AppArmor, continue as described in Section 23.4.1, “Changing AppArmor Status”. To change the mode of individual profiles, continue as described in Section 23.4.2, “Changing the Mode of Individual Profiles”.

23.4.1 Changing AppArmor Status

When you change the status of AppArmor, set it to enabled or disabled. When AppArmor is enabled, it is installed, running, and enforcing the AppArmor security policies.

  1. Start YaST, select AppArmor Configuration, and click Settings in the main window.

  2. Enable AppArmor by checking Enable AppArmor or disable AppArmor by deselecting it.

  3. Click Done in the AppArmor Configuration window.

Tip

You always need to restart running programs to apply the profiles to them.

23.4.2 Changing the Mode of Individual Profiles

AppArmor can apply profiles in two different modes. In complain mode, violations of AppArmor profile rules, such as the profiled program accessing files not permitted by the profile, are detected. The violations are permitted, but also logged. This mode is convenient for developing profiles and is used by the AppArmor tools for generating profiles. Loading a profile in enforce mode enforces the policy defined in the profile, and reports policy violation attempts to rsyslogd (or auditd or journalctl, depending on system configuration).

The Profile Mode Configuration dialog allows you to view and edit the mode of currently loaded AppArmor profiles. This feature is useful for determining the status of your system during profile development. During systemic profiling (see Section 24.7.2, “Systemic Profiling”), you can use this tool to adjust and monitor the scope of the profiles for which you are learning behavior.

To edit an application's profile mode, proceed as follows:

  1. Start YaST, select AppArmor Configuration, and click Settings in the main window.

  2. In the Configure Profile Modes section, select Configure.

  3. Select the profile for which to change the mode.

  4. Select Toggle Mode to set this profile to complain mode or to enforce mode.

  5. Apply your settings and leave YaST with Done.

To change the mode of all profiles, use Set All to Enforce or Set All to Complain.

Tip: Listing the Profiles Available

By default, only active profiles are listed (any profile that has a matching application installed on your system). To set up a profile before installing the respective application, click Show All Profiles and select the profile to configure from the list that appears.

24 Building Profiles from the Command Line

AppArmor® lets you use a command line interface rather than a graphical interface to manage and configure system security. Track the status of AppArmor and create, delete, or modify AppArmor profiles using the AppArmor command line tools.

Tip: Background Information

Before starting to manage your profiles using the AppArmor command line tools, check out the general introduction to AppArmor given in Chapter 20, Immunizing Programs and Chapter 21, Profile Components and Syntax.

24.1 Checking the AppArmor Status

AppArmor can be in any one of three states:

Unloaded

AppArmor is not activated in the kernel.

Running

AppArmor is activated in the kernel and is enforcing AppArmor program policies.

Stopped

AppArmor is activated in the kernel, but no policies are enforced.

Detect the state of AppArmor by inspecting /sys/kernel/security/apparmor/profiles. If cat /sys/kernel/security/apparmor/profiles reports a list of profiles, AppArmor is running. If it is empty and returns nothing, AppArmor is stopped. If the file does not exist, AppArmor is unloaded.
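
For example, a quick check could look like this (the profile names and modes shown are placeholders; the actual output depends on your profile set):

tux > sudo cat /sys/kernel/security/apparmor/profiles
/usr/sbin/httpd2-prefork (enforce)
/usr/sbin/ntpd (complain)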

Manage AppArmor with systemctl. It lets you perform the following operations:

sudo systemctl start apparmor

Behavior depends on the state of AppArmor. If it is unloaded, start activates it and puts it in the running state. If it is stopped, start triggers a re-scan of the AppArmor profiles usually found in /etc/apparmor.d and puts AppArmor in the running state. If AppArmor is already running, start reports a warning and takes no action.

Note: Already Running Processes

Already running processes need to be restarted to apply the AppArmor profiles on them.

sudo systemctl stop apparmor

Stops AppArmor if it is running by removing all profiles from kernel memory, effectively disabling all access controls, and putting AppArmor into the stopped state. If AppArmor is already stopped, stop tries to unload the profiles again, but nothing happens.

sudo systemctl reload apparmor

Causes the AppArmor module to re-scan the profiles in /etc/apparmor.d without unconfining running processes. Freshly created profiles are enforced, and profiles recently deleted from the /etc/apparmor.d directory are removed from the kernel.

24.2 Building AppArmor Profiles

The AppArmor module profile definitions are stored in the /etc/apparmor.d directory as plain text files. For a detailed description of the syntax of these files, refer to Chapter 21, Profile Components and Syntax.

All files in the /etc/apparmor.d directory are interpreted as profiles and are loaded as such. Renaming files in that directory is not an effective way of preventing profiles from being loaded. To prevent profiles from being read and evaluated, you must remove them from this directory, or call aa-disable on the profile, which will create a symbolic link in /etc/apparmor.d/disable/.

You can use a text editor, such as vi, to access and make changes to these profiles. The following sections contain detailed steps for building profiles:

Adding or Creating AppArmor Profiles

Refer to Section 24.3, “Adding or Creating an AppArmor Profile”

Editing AppArmor Profiles

Refer to Section 24.4, “Editing an AppArmor Profile”

Deleting AppArmor Profiles

Refer to Section 24.6, “Deleting an AppArmor Profile”

24.3 Adding or Creating an AppArmor Profile

To add or create an AppArmor profile for an application, you can use a systemic or stand-alone profiling method, depending on your needs. Learn more about these two approaches in Section 24.7, “Two Methods of Profiling”.

24.4 Editing an AppArmor Profile

The following steps describe the procedure for editing an AppArmor profile:

  1. If you are not currently logged in as root, enter su in a terminal window.

  2. Enter the root password when prompted.

  3. Go to the profile directory with cd /etc/apparmor.d/.

  4. Enter ls to view all profiles currently installed.

  5. Open the profile to edit in a text editor, such as vim.

  6. Make the necessary changes, then save the profile.

  7. Restart AppArmor by entering systemctl reload apparmor in a terminal window.
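
If you changed only a single profile, you can alternatively reload just that profile with apparmor_parser; a sketch, assuming the profile file follows the standard naming convention:

tux > sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.httpd2-prefork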

24.5 Unloading Unknown AppArmor Profiles

Warning: Danger of Unloading Wanted Profiles

aa-remove-unknown will unload all profiles that are not stored in /etc/apparmor.d, for example automatically generated LXD profiles. This may compromise the security of the system. Use the -n parameter to list all profiles that will be unloaded.

To unload all profiles that are no longer stored in /etc/apparmor.d/, run:

tux > sudo aa-remove-unknown

You can print a list of profiles that will be removed:

tux > sudo aa-remove-unknown -n

24.6 Deleting an AppArmor Profile

The following steps describe the procedure for deleting an AppArmor profile.

  1. Remove the AppArmor definition from the kernel:

    tux > sudo apparmor_parser -R /etc/apparmor.d/PROFILE
  2. Remove the definition file:

    tux > sudo rm /etc/apparmor.d/PROFILE
    tux > sudo rm /var/lib/apparmor/cache/PROFILE

24.7 Two Methods of Profiling

Given the syntax for AppArmor profiles in Chapter 21, Profile Components and Syntax, you could create profiles without using the tools. However, the effort involved would be substantial. To avoid such a situation, use the AppArmor tools to automate the creation and refinement of profiles.

There are two ways to approach AppArmor profile creation. Tools are available for both methods.

Stand-Alone Profiling

A method suitable for profiling small applications that have a finite runtime, such as user client applications like mail clients. For more information, refer to Section 24.7.1, “Stand-Alone Profiling”.

Systemic Profiling

A method suitable for profiling many programs at once and for profiling applications that may run for days, weeks, or continuously across reboots, such as network server applications like Web servers and mail servers. For more information, refer to Section 24.7.2, “Systemic Profiling”.

Automated profile development becomes more manageable with the AppArmor tools:

  1. Decide which profiling method suits your needs.

  2. Perform a static analysis. Run either aa-genprof or aa-autodep, depending on the profiling method chosen.

  3. Enable dynamic learning. Activate learning mode for all profiled programs.

24.7.1 Stand-Alone Profiling

Stand-alone profile generation and improvement is managed by a program called aa-genprof. This method is easy because aa-genprof takes care of everything, but is limited because it requires aa-genprof to run for the entire duration of the test run of your program (you cannot reboot the machine while you are still developing your profile).

To use aa-genprof for the stand-alone method of profiling, refer to Section 24.7.3.8, “aa-genprof—Generating Profiles”.

24.7.2 Systemic Profiling

This method is called systemic profiling because it updates all of the profiles on the system at once, rather than focusing on the one or few targeted by aa-genprof or stand-alone profiling. With systemic profiling, profile construction and improvement are somewhat less automated, but more flexible. This method is suitable for profiling long-running applications whose behavior continues after rebooting, or many programs at once.

Build an AppArmor profile for a group of applications as follows:

  1. Create profiles for the individual programs that make up your application.

    Although this approach is systemic, AppArmor only monitors those programs with profiles and their children. To get AppArmor to consider a program, you must at least have aa-autodep create an approximate profile for it. To create this approximate profile, refer to Section 24.7.3.1, “aa-autodep—Creating Approximate Profiles”.

  2. Put relevant profiles into learning or complain mode.

    Activate learning or complain mode for all profiled programs by entering

    aa-complain /etc/apparmor.d/*

    in a terminal window while logged in as root. This functionality is also available through the YaST Profile Mode module, described in Section 23.4.2, “Changing the Mode of Individual Profiles”.

    When in learning mode, access requests are not blocked, even if the profile dictates that they should be. This enables you to run through several tests (as shown in Step 3) and learn the access needs of the program so it runs properly. With this information, you can decide how secure to make the profile.

    Refer to Section 24.7.3.2, “aa-complain—Entering Complain or Learning Mode” for more detailed instructions for using learning or complain mode.

  3. Exercise your application.

    Run your application and exercise its functionality. How much to exercise the program is up to you, but you need the program to access each file representing its access needs. Because the execution is not being supervised by aa-genprof, this step can go on for days or weeks and can span complete system reboots.

  4. Analyze the log.

    In systemic profiling, run aa-logprof directly instead of letting aa-genprof run it (as in stand-alone profiling). The general form of aa-logprof is:

    aa-logprof [ -d /path/to/profiles ] [ -f /path/to/logfile ]

    Refer to Section 24.7.3.9, “aa-logprof—Scanning the System Log” for more information about using aa-logprof.

  5. Repeat Step 3 and Step 4.

    This generates optimal profiles. An iterative approach captures smaller data sets that can be trained and reloaded into the policy engine. Subsequent iterations generate fewer messages and run faster.

  6. Edit the profiles.

    You should review the profiles that have been generated. You can open and edit the profiles in /etc/apparmor.d/ using a text editor.

  7. Return to enforce mode.

    This is when the system goes back to enforcing the rules of the profiles, not only logging information. This can be done manually by removing the flags=(complain) text from the profiles or automatically by using the aa-enforce command, which works identically to the aa-complain command, except it sets the profiles to enforce mode. This functionality is also available through the YaST Profile Mode module, described in Section 23.4.2, “Changing the Mode of Individual Profiles”.

    To ensure that all profiles are taken out of complain mode and put into enforce mode, enter aa-enforce /etc/apparmor.d/*.

  8. Re-scan all profiles.

    To have AppArmor re-scan all of the profiles and change the enforcement mode in the kernel, enter systemctl reload apparmor.

24.7.3 Summary of Profiling Tools

All of the AppArmor profiling utilities are provided by the apparmor-utils RPM package and are stored in /usr/sbin. Each tool has a different purpose.

24.7.3.1 aa-autodep—Creating Approximate Profiles

aa-autodep creates an approximate profile for the selected program or application. You can generate approximate profiles for binary executables and interpreted script programs. The resulting profile is called approximate because it does not necessarily contain all of the profile entries that the program needs to be properly confined by AppArmor. The aa-autodep approximate profile contains, at minimum, a base include directive, which contains basic profile entries needed by most programs. For certain types of programs, aa-autodep generates a more expanded profile. The profile is generated by recursively calling ldd(1) on the executables listed on the command line.

To generate an approximate profile, use the aa-autodep program. The program argument can be either the simple name of the program, which aa-autodep finds by searching your shell's path variable, or it can be a fully qualified path. The program itself can be of any type (ELF binary, shell script, Perl script, etc.). aa-autodep generates an approximate profile to improve through the dynamic profiling that follows.

The resulting approximate profile is written to the /etc/apparmor.d directory using the AppArmor profile naming convention of naming the profile after the absolute path of the program, replacing the forward slash (/) characters in the path with period (.) characters. The general syntax of aa-autodep is to enter the following in a terminal window when logged in as root:

aa-autodep [ -d /PATH/TO/PROFILES ] [PROGRAM1 PROGRAM2...]

If you do not enter the program name or names, you are prompted for them. The -d /PATH/TO/PROFILES option overrides the default location of /etc/apparmor.d, should you keep profiles in a location other than the default.
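
For example, to create an approximate profile for a hypothetical program /usr/local/bin/myapp, run:

tux > sudo aa-autodep /usr/local/bin/myapp

Following the naming convention described above, the resulting profile is written to /etc/apparmor.d/usr.local.bin.myapp.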

To begin profiling, you must create profiles for each main executable service that is part of your application (anything that might start without being a child of another program that already has a profile). Finding all such programs depends on the application in question. Here are several strategies for finding such programs:

Directories

If all the programs to profile are in one directory and there are no other programs in that directory, the simple command aa-autodep /path/to/your/programs/* creates basic profiles for all programs in that directory.

pstree -p

You can run your application and use the standard Linux pstree command to find all processes running. Then manually hunt down the location of these programs and run aa-autodep for each one. If the programs are in your path, aa-autodep finds them for you. If they are not in your path, the standard Linux command find might be helpful in finding your programs. Execute find / -name 'MY_APPLICATION' -print to determine an application's path (MY_APPLICATION being an example application). You may use wild cards if appropriate.

24.7.3.2 aa-complain—Entering Complain or Learning Mode

The complain or learning mode tool (aa-complain) detects violations of AppArmor profile rules, such as the profiled program accessing files not permitted by the profile. The violations are permitted, but also logged. To improve the profile, turn complain mode on, run the program through a suite of tests to generate log events that characterize the program's access needs, then postprocess the log with the AppArmor tools to transform log events into improved profiles.

Manually activating complain mode (using the command line) adds a flag to the top of the profile so that /bin/foo becomes /bin/foo flags=(complain). To use complain mode, open a terminal window and enter one of the following lines as root:

  • If the example program (PROGRAM1) is in your path, use:

    aa-complain [PROGRAM1 PROGRAM2 ...]
  • If the program is not in your path, specify the entire path as follows:

    aa-complain /sbin/PROGRAM1
  • If the profiles are not in /etc/apparmor.d, use the following to override the default location:

    aa-complain /path/to/profiles/PROGRAM1
  • Specify the profile for /sbin/program1 as follows:

    aa-complain /etc/apparmor.d/sbin.PROGRAM1

Each of the above commands activates the complain mode for the profiles or programs listed. If the program name does not include its entire path, aa-complain searches $PATH for the program. For example, aa-complain /usr/sbin/* finds profiles associated with all of the programs in /usr/sbin and puts them into complain mode. aa-complain /etc/apparmor.d/* puts all of the profiles in /etc/apparmor.d into complain mode.

Tip: Toggling Profile Mode with YaST

YaST offers a graphical front-end for toggling complain and enforce mode. See Section 23.4.2, “Changing the Mode of Individual Profiles” for information.

24.7.3.3 aa-decode—Decoding Hex-encoded Strings in AppArmor Log Files

aa-decode will decode hex-encoded strings in the AppArmor log output. It can also process the audit log on standard input, convert any hex-encoded AppArmor log entries, and display them on standard output.
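
For example, to decode a single hex-encoded string, or to filter a complete audit log (the hex value shown is illustrative and decodes to /tmp/test):

tux > aa-decode 2F746D702F74657374
tux > cat /var/log/audit/audit.log | aa-decode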

24.7.3.4 aa-disable—Disabling an AppArmor Security Profile

Use aa-disable to disable the enforcement mode for one or more AppArmor profiles. This command will unload the profile from the kernel and prevent the profile from being loaded on AppArmor start-up. Use the aa-enforce or aa-complain utilities to change this behavior.
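
For example, to disable the profile for a hypothetical program:

tux > sudo aa-disable /usr/local/bin/myapp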

24.7.3.5 aa-easyprof—Easy Profile Generation

aa-easyprof provides an easy-to-use interface for AppArmor profile generation. aa-easyprof supports the use of templates and profile groups to quickly profile an application. While aa-easyprof can help with profile generation, its utility is dependent on the quality of the templates, profile groups and abstractions used. Also, this tool may create a profile that is less restricted than when creating a profile manually or with aa-genprof and aa-logprof.
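
A minimal invocation might look like the following (the binary path is a placeholder); aa-easyprof typically prints the generated profile to standard output so you can review it before installing it:

tux > aa-easyprof /usr/local/bin/myapp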

For more information, see the man page of aa-easyprof (8).

24.7.3.6 aa-enforce—Entering Enforce Mode

The enforce mode detects violations of AppArmor profile rules, such as the profiled program accessing files not permitted by the profile. The violations are logged and not permitted. The default is for enforce mode to be enabled. To log the violations only, but still permit them, use complain mode.

Manually activating enforce mode (using the command line) removes the complain flag from the top of the profile so that /bin/foo flags=(complain) becomes /bin/foo. To use enforce mode, open a terminal window and enter one of the following lines as root.

  • If the example program (PROGRAM1) is in your path, use:

    aa-enforce [PROGRAM1 PROGRAM2 ...]
  • If the program is not in your path, specify the entire path, as follows:

    aa-enforce /sbin/PROGRAM1
  • If the profiles are not in /etc/apparmor.d, use the following to override the default location:

    aa-enforce -d /path/to/profiles/ PROGRAM1
  • Specify the profile for /sbin/program1 as follows:

    aa-enforce /etc/apparmor.d/sbin.PROGRAM1

Each of the above commands activates the enforce mode for the profiles and programs listed.

If you do not enter the program or profile names, you are prompted to enter one. /path/to/profiles overrides the default location of /etc/apparmor.d.

The argument can be either a list of programs or a list of profiles. If the program name does not include its entire path, aa-enforce searches $PATH for the program.

Tip: Toggling Profile Mode with YaST

YaST offers a graphical front-end for toggling complain and enforce mode. See Section 23.4.2, “Changing the Mode of Individual Profiles” for information.

24.7.3.7 aa-exec—Confining a Program with the Specified Profile

Use aa-exec to launch a program confined by a specified profile and/or profile namespace. If both a profile and namespace are specified, the program will be confined by the profile in the new namespace. If only a profile namespace is specified, the profile name of the current confinement will be used. If neither a profile nor namespace is specified, the command will be run using the standard profile attachment—as if you did not use the aa-exec command.

For more information on the command's options, see its manual page man 8 aa-exec.

24.7.3.8 aa-genprof—Generating Profiles

aa-genprof is AppArmor's profile generating utility. It runs aa-autodep on the specified program, creating an approximate profile (if a profile does not already exist for it), sets it to complain mode, reloads it into AppArmor, marks the log, and prompts the user to execute the program and exercise its functionality. Its syntax is as follows:

aa-genprof [ -d /path/to/profiles ]  PROGRAM

To create a profile for the Apache Web server program httpd2-prefork, do the following as root:

  1. Enter systemctl stop apache2.

  2. Next, enter aa-genprof httpd2-prefork.

    Now aa-genprof does the following:

    1. Resolves the full path of httpd2-prefork using your shell's path variables. You can also specify a full path. On SUSE Linux Enterprise Desktop, the default full path is /usr/sbin/httpd2-prefork.

    2. Checks to see if there is an existing profile for httpd2-prefork. If there is one, it updates it. If not, it creates one using the aa-autodep as described in Section 24.7.3, “Summary of Profiling Tools”.

    3. Puts the profile for this program into learning or complain mode so that profile violations are logged, but are permitted to proceed. A log event looks like this (see /var/log/audit/audit.log):

      type=APPARMOR_ALLOWED msg=audit(1189682639.184:20816): \
      apparmor="DENIED" operation="file_mmap" parent=2692 \
      profile="/usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT" \
      name="/var/log/apache2/access_log-20140116" pid=28730 comm="httpd2-prefork" \
      requested_mask="::r" denied_mask="::r" fsuid=30 ouid=0

      If you are not running the audit daemon, the AppArmor events are logged directly to systemd journal (see Chapter 16, journalctl: Query the systemd Journal):

      Sep 13 13:20:30 K23 kernel: audit(1189682430.672:20810): \
      apparmor="DENIED" operation="file_mmap" parent=2692 \
      profile="/usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT" \
      name="/var/log/apache2/access_log-20140116" pid=28730 comm="httpd2-prefork" \
      requested_mask="::r" denied_mask="::r" fsuid=30 ouid=0

      They also can be viewed using the dmesg command:

      audit(1189682430.672:20810): apparmor="DENIED" \
      operation="file_mmap" parent=2692 \
      profile="/usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT" \
      name="/var/log/apache2/access_log-20140116" pid=28730 comm="httpd2-prefork" \
      requested_mask="::r" denied_mask="::r" fsuid=30 ouid=0
    4. Marks the log with a beginning marker of log events to consider. For example:

      Sep 13 17:48:52 figwit root: GenProf: e2ff78636296f16d0b5301209a04430d
  3. When prompted by the tool, run the application to profile in another terminal window and perform as many of the application functions as possible. Thus, the learning mode can log the files and directories to which the program requires access to function properly. For example, in a new terminal window, enter systemctl start apache2.

  4. Select from the following options that are available in the aa-genprof terminal window after you have executed the program function:

    • S runs aa-genprof on the system log from where it was marked when aa-genprof was started and reloads the profile. If system events exist in the log, AppArmor parses the learning mode log files. This generates a series of questions that you must answer to guide aa-genprof in generating the security profile.

    • F exits the tool.

    Note

    If requests to add hats appear, proceed to Chapter 25, Profiling Your Web Applications Using ChangeHat.

  5. Answer two types of questions:

    Each of these categories results in a series of questions that you must answer to add the resource or program to the profile. Example 24.1, “Learning Mode Exception: Controlling Access to Specific Resources” and Example 24.2, “Learning Mode Exception: Defining Permissions for an Entry” provide examples of each one. Subsequent steps describe your options in answering these questions.

    • Dealing with execute accesses is complex. You must decide how to proceed with this entry regarding which execute permission type to grant to this entry:

      Example 24.1: Learning Mode Exception: Controlling Access to Specific Resources
      Reading log entries from /var/log/audit/audit.log.
      Updating AppArmor profiles in /etc/apparmor.d.
      
      Profile:  /usr/sbin/xinetd
      Program:  xinetd
      Execute:  /usr/lib/cups/daemon/cups-lpd
      Severity: unknown
      
      (I)nherit / (P)rofile / (C)hild / (N)ame / (U)nconfined / (X)ix / (D)eny / Abo(r)t / (F)inish
      Inherit (ix)

      The child inherits the parent's profile, running with the same access controls as the parent. This mode is useful when a confined program needs to call another confined program without gaining the permissions of the target's profile or losing the permissions of the current profile. This mode is often used when the child program is a helper application, such as the /usr/bin/mail client using less as a pager.

      Profile (px/Px)

      The child runs using its own profile, which must be loaded into the kernel. If the profile is not present, attempts to execute the child fail with permission denied. This is most useful if the parent program is invoking a global service, such as DNS lookups or sending mail with your system's MTA.

      Choose the profile with clean exec (Px) option to scrub the environment of environment variables that could modify execution behavior when passed to the child process.

      Child (cx/Cx)

      Sets up a transition to a subprofile. It is like px/Px transition, except to a child profile.

      Choose the profile with clean exec (Cx) option to scrub the environment of environment variables that could modify execution behavior when passed to the child process.

      Unconfined (ux/Ux)

      The child runs completely unconfined without any AppArmor profile applied to the executed resource.

      Choose the unconfined with clean exec (Ux) option to scrub the environment of environment variables that could modify execution behavior when passed to the child process. Note that running unconfined profiles introduces a security vulnerability that could be used to evade AppArmor. Only use it as a last resort.

      mmap (m)

      This permission denotes that the program running under the profile can access the resource using the mmap system call with the flag PROT_EXEC. This means that the data mapped in it can be executed. You are prompted to include this permission if it is requested during a profiling run.

      Deny

      Adds a deny rule to the profile, and permanently prevents the program from accessing the specified directory path entries. AppArmor then continues to the next event.

      Abort

      Aborts aa-logprof, losing all rule changes entered so far and leaving all profiles unmodified.

      Finish

      Closes aa-logprof, saving all rule changes entered so far and modifying all profiles.

    • In Example 24.2, “Learning Mode Exception: Defining Permissions for an Entry”, AppArmor suggests allowing a globbing pattern /var/run/nscd/* for reading, then using an abstraction to cover common Apache-related access rules.

      Example 24.2: Learning Mode Exception: Defining Permissions for an Entry
      Profile:  /usr/sbin/httpd2-prefork
      Path:     /var/run/nscd/dbSz9CTr
      Mode:     r
      Severity: 3
      
        1 - /var/run/nscd/dbSz9CTr
       [2 - /var/run/nscd/*]
      
      (A)llow / [(D)eny] / (G)lob / Glob w/(E)xt / (N)ew / Abo(r)t / (F)inish / (O)pts
      Adding /var/run/nscd/* r to profile.
      
      Profile:  /usr/sbin/httpd2-prefork
      Path:     /proc/11769/attr/current
      Mode:     w
      Severity: 9
      
       [1 - #include <abstractions/apache2-common>]
        2 - /proc/11769/attr/current
        3 - /proc/*/attr/current
      
      (A)llow / [(D)eny] / (G)lob / Glob w/(E)xt / (N)ew / Abo(r)t / (F)inish / (O)pts
      Adding #include <abstractions/apache2-common> to profile.

      AppArmor provides one or more paths or includes. By entering the option number, select the desired option, then proceed to the next step.

      Note

      Not all of these options are always presented in the AppArmor menu.

      #include

      This is the section of an AppArmor profile that refers to an include file, which provides access permissions for programs. By using an include, you can give the program access to directory paths or files that are also required by other programs. Using includes can reduce the size of a profile. It is good practice to select includes when suggested.

      Globbed Version

      This is accessed by selecting Glob as described in the next step. For information about globbing syntax, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”.

      Actual Path

      This is the literal path to which the program needs access so that it can run properly.

      After you select the path or include, process it as an entry into the AppArmor profile by selecting Allow or Deny. If you are not satisfied with the directory path entry as it is displayed, you can also Glob it.

      The following options are available to process the learning mode entries and build the profile:

      Select Enter

      Allows access to the selected directory path.

      Allow

      Allows access to the specified directory path entries. AppArmor suggests file permission access. For more information, refer to Section 21.7, “File Permission Access Modes”.

      Deny

      Prevents the program from accessing the specified directory path entries. AppArmor then continues to the next event.

      New

      Prompts you to enter your own rule for this event, allowing you to specify a regular expression. If the expression does not actually satisfy the event that prompted the question in the first place, AppArmor asks for confirmation and lets you reenter the expression.

      Glob

      Select a specific path or create a general rule using wild cards that match a broader set of paths. To select any of the offered paths, enter the number that is printed in front of the path then decide how to proceed with the selected item.

      For more information about globbing syntax, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”.

      Glob w/Ext

      This modifies the original directory path while retaining the file name extension. For example, /etc/apache2/file.ext becomes /etc/apache2/*.ext, adding the wild card (asterisk) in place of the file name. This allows the program to access all files in the suggested directory that end with the .ext extension.

      Abort

      Aborts aa-logprof, losing all rule changes entered so far and leaving all profiles unmodified.

      Finish

      Closes aa-logprof, saving all rule changes entered so far and modifying all profiles.

  6. To view and edit your profile using vi, enter vi /etc/apparmor.d/PROFILENAME in a terminal window. To enable syntax highlighting when editing an AppArmor profile in vim, use the commands :syntax on then :set syntax=apparmor. For more information about vim and syntax highlighting, refer to Section 24.7.3.14, “apparmor.vim”.

  7. Restart AppArmor and reload the profile set including the newly created one using the systemctl reload apparmor command.

Like the graphical front-end for building AppArmor profiles, the YaST Add Profile Wizard, aa-genprof also supports the use of the local profile repository under /etc/apparmor/profiles/extras and the remote AppArmor profile repository.

To use a profile from the local repository, proceed as follows:

  1. Start aa-genprof as described above.

    If aa-genprof finds an inactive local profile, the following lines appear on your terminal window:

    Profile: /usr/bin/opera
    
     [1 - Inactive local profile for /usr/bin/opera]
    
    [(V)iew Profile] / (U)se Profile / (C)reate New Profile / Abo(r)t / (F)inish
  2. To use this profile, press U (Use Profile) and follow the profile generation procedure outlined above.

    To examine the profile before activating it, press V (View Profile).

    To ignore the existing profile, press C (Create New Profile) and follow the profile generation procedure outlined above to create the profile from scratch.

  3. Leave aa-genprof by pressing F (Finish) when you are done and save your changes.

24.7.3.9 aa-logprof—Scanning the System Log

aa-logprof is an interactive tool used to review the complain and enforce mode events found in the log entries in /var/log/audit/audit.log, or directly in the systemd journal (see Chapter 16, journalctl: Query the systemd Journal), and generate new entries in AppArmor security profiles.

When you run aa-logprof, it scans the log files produced in complain and enforce mode and, if there are new security events that are not covered by the existing profile set, suggests modifications to the profile based on the observed program behavior.

If a confined program forks and executes another program, aa-logprof sees this and asks the user which execution mode should be used when launching the child process. The execution modes ix, px, Px, ux, Ux, cx, Cx, and named profiles are options for starting the child process. If a separate profile exists for the child process, the default selection is Px. If one does not exist, the profile defaults to ix. Child processes with separate profiles have aa-autodep run on them and are loaded into AppArmor, if it is running.

When aa-logprof exits, profiles are updated with the changes. If AppArmor is active, the updated profiles are reloaded and, if any processes that generated security events are still running in the null-XXXX profiles (unique profiles temporarily created in complain mode), those processes are set to run under their proper profiles.

To run aa-logprof, enter aa-logprof into a terminal window while logged in as root. The following options can be used for aa-logprof:

aa-logprof -d /path/to/profile/directory/

Specifies the full path to the location of the profiles if the profiles are not located in the standard directory, /etc/apparmor.d/.

aa-logprof -f /path/to/logfile

Specifies the full path to the location of the log file if the log file is not located in the default location, /var/log/audit/audit.log.

aa-logprof -m "string marker in logfile"

Marks the starting point for aa-logprof to look in the system log. aa-logprof ignores all events in the system log before the specified mark. If the mark contains spaces, it must be surrounded by quotes to work correctly. For example:

aa-logprof -m "17:04:21"

or

aa-logprof -m e2ff78636296f16d0b5301209a04430d

aa-logprof scans the log, asking you how to handle each logged event. Each question presents a numbered list of AppArmor rules that can be added by pressing the number of the item on the list.

By default, aa-logprof looks for profiles in /etc/apparmor.d/. Often running aa-logprof as root is enough to update the profile. However, there might be times when you need to search archived log files, such as if the program exercise period exceeds the log rotation window (when the log file is archived and a new log file is started). If this is the case, you can enter:

zcat -f `ls -1tr /path/to/logfile*` | aa-logprof -f -

24.7.3.10 aa-logprof Example 1

The following is an example of how aa-logprof addresses httpd2-prefork accessing the file /etc/group. [] indicates the default option.

In this example, the access to /etc/group is part of httpd2-prefork accessing name services. The appropriate response is 1, which includes a predefined set of AppArmor rules. Selecting 1 to #include the name service package resolves all of the future questions pertaining to DNS lookups and makes the profile less brittle in that any changes to DNS configuration and the associated name service profile package can be made once, rather than needing to revise many profiles.

Profile:  /usr/sbin/httpd2-prefork
Path:     /etc/group
New Mode: r

[1 - #include <abstractions/nameservice>]
 2 - /etc/group
[(A)llow] / (D)eny / (N)ew / (G)lob / Glob w/(E)xt / Abo(r)t / (F)inish

Select one of the following responses:

Select Enter

Triggers the default action, which is, in this example, allowing access to the specified directory path entry.

Allow

Allows access to the specified directory path entries. AppArmor suggests file permission access. For more information about this, refer to Section 21.7, “File Permission Access Modes”.

Deny

Permanently prevents the program from accessing the specified directory path entries. AppArmor then continues to the next event.

New

Prompts you to enter your own rule for this event, allowing you to specify whatever form of regular expression you want. If the expression entered does not actually satisfy the event that prompted the question in the first place, AppArmor asks for confirmation and lets you reenter the expression.

Glob

Select either a specific path or create a general rule using wild cards that match a broader set of paths. To select any of the offered paths, enter the number that is printed in front of the path, then decide how to proceed with the selected item.

For more information about globbing syntax, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”.

Glob w/Ext

This modifies the original directory path while retaining the file name extension. For example, /etc/apache2/file.ext becomes /etc/apache2/*.ext, adding the wild card (asterisk) in place of the file name. This allows the program to access all files in the suggested directory that end with the .ext extension.

Abort

Aborts aa-logprof, losing all rule changes entered so far and leaving all profiles unmodified.

Finish

Closes aa-logprof, saving all rule changes entered so far and modifying all profiles.

24.7.3.11 aa-logprof Example 2

For example, when profiling vsftpd, you might see this question:

Profile:  /usr/sbin/vsftpd
Path:     /y2k.jpg
New Mode: r

[1 - /y2k.jpg]

(A)llow / [(D)eny] / (N)ew / (G)lob / Glob w/(E)xt / Abo(r)t / (F)inish

Several items of interest appear in this question. First, note that vsftpd is asking for a path entry at the top of the tree, even though vsftpd on SUSE Linux Enterprise Desktop serves FTP files from /srv/ftp by default. This is because vsftpd uses chroot and, for the portion of the code inside the chroot jail, AppArmor sees file accesses in terms of the chroot environment rather than the global absolute path.

The second item of interest is that you might want to grant FTP read access to all JPEG files in the directory, so you could use Glob w/Ext and the suggested path of /*.jpg. Doing so collapses all previous rules granting access to individual .jpg files and forestalls any future questions pertaining to access to .jpg files.

Finally, you might want to grant more general access to FTP files. If you select Glob in the last entry, aa-logprof replaces the suggested path of /y2k.jpg with /*. Alternatively, you might want to grant even more access to the entire directory tree, in which case you could use the New path option and enter /**.jpg (which would grant access to all .jpg files in the entire directory tree) or /** (which would grant access to all files in the directory tree).

These items deal with read accesses. Write accesses are similar, except that it is good policy to be more conservative in your use of regular expressions for write accesses. Dealing with execute accesses is more complex. Find an example in Example 24.1, “Learning Mode Exception: Controlling Access to Specific Resources”.

In the following example, the /usr/bin/mail mail client is being profiled and aa-logprof has discovered that /usr/bin/mail executes /usr/bin/less as a helper application to page long mail messages. Consequently, it presents this prompt:

/usr/bin/nail -> /usr/bin/less
(I)nherit / (P)rofile / (C)hild / (N)ame / (U)nconfined / (X)ix / (D)eny
Note

The actual executable file for /usr/bin/mail turns out to be /usr/bin/nail, which is not a typographical error.

The program /usr/bin/less appears to be a simple one for scrolling through text that is more than one screen long and that is in fact what /usr/bin/mail is using it for. However, less is actually a large and powerful program that uses many other helper applications, such as tar and rpm.

Tip

Run less on a tar file or an RPM file and it shows you the inventory of these containers.

You do not want to run rpm automatically when reading mail messages (that leads directly to a Microsoft* Outlook–style virus attack, because RPM has the power to install and modify system programs), so, in this case, the best choice is to use Inherit. This results in the less program executed from this context running under the profile for /usr/bin/mail. This has two consequences:

  • You need to add all of the basic file accesses for /usr/bin/less to the profile for /usr/bin/mail.

  • You can avoid adding the helper applications, such as tar and rpm, to the /usr/bin/mail profile so that when /usr/bin/mail runs /usr/bin/less in this context, the less program is far less dangerous than it would be without AppArmor protection. Another option is to use the Cx execute modes. For more information on execute modes, see Section 21.8, “Execute Modes”.

In other circumstances, you might instead want to use the Profile option. This has the following effects on aa-logprof:

  • The rule written into the profile uses px/Px, which forces the transition to the child's own profile.

  • aa-logprof constructs a profile for the child and starts building it, in the same way that it built the parent profile, by assigning events for the child process to the child's profile and asking the aa-logprof user questions. The profile will also be applied if you run the child as a stand-alone program.

If a confined program forks and executes another program, aa-logprof sees this and asks the user which execution mode should be used when launching the child process. The execution modes of inherit, profile, unconfined, child, named profile, or an option to deny the execution are presented.

If a separate profile exists for the child process, the default selection is profile. If a profile does not exist, the default is inherit. The inherit option, or ix, is described in Section 21.7, “File Permission Access Modes”.

The profile option indicates that the child program should run in its own profile. A secondary question asks whether to sanitize the environment that the child program inherits from the parent. If you choose to sanitize the environment, this places the execution modifier Px in your AppArmor profile. If you choose not to sanitize, px is placed in the profile and no environment sanitizing occurs. If you select the profile execution mode, the default is Px.

The unconfined execution mode is not recommended and should only be used when there is no other option to generate a profile for a program reliably. Selecting unconfined opens a warning dialog asking for confirmation of the choice. If you are sure and choose Yes, a second dialog asks whether to sanitize the environment. To use the execution mode Ux in your profile, select Yes. To use the execution mode ux in your profile instead, select No. The default value for the unconfined execution mode is Ux.

Important: Running Unconfined

Selecting ux or Ux is very dangerous and provides no policy enforcement (from a security perspective) over the resulting execution behavior of the child program.

24.7.3.12 aa-unconfined—Identifying Unprotected Processes

The aa-unconfined command examines open network ports on your system, compares that to the set of profiles loaded on your system, and reports network services that do not have AppArmor profiles. It requires root privileges and that it not be confined by an AppArmor profile.

aa-unconfined must be run as root to retrieve the process executable link from the /proc file system. This program is susceptible to the following race conditions:

  • An unlinked executable is mishandled

  • A process that dies between netstat(8) and further checks is mishandled

Note

This program lists processes using TCP and UDP only. In short, this program is unsuitable for forensics use and is provided only as an aid to profiling all network-accessible processes in the lab.

24.7.3.13 aa-notify

aa-notify is a handy utility that displays AppArmor notifications in your desktop environment. This is very convenient if you do not want to inspect the AppArmor log file, but rather let the desktop inform you about events that violate the policy. To enable AppArmor desktop notifications, run aa-notify:

sudo aa-notify -p -u USERNAME --display DISPLAY_NUMBER

where USERNAME is the user name under which you are logged in, and DISPLAY_NUMBER is the X Window display number you are currently using, such as :0. The process runs in the background and shows a notification each time a deny event happens.

Tip

The active X Window display number is saved in the $DISPLAY variable, so you can use --display $DISPLAY to avoid finding out the current display number.
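
For example, a complete invocation using both shortcuts (this assumes $USER and $DISPLAY are set in your desktop session, which is normally the case):

sudo aa-notify -p -u $USER --display $DISPLAY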

Figure 24.1: aa-notify Message in GNOME

With the -s DAYS option, you can also configure aa-notify to display a summary of notifications for the specified number of past days. For more information on aa-notify, see its man page man 8 aa-notify.
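
For example, to display a summary of deny events from the last two days (the number of days is illustrative):

sudo aa-notify -s 2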

24.7.3.14 apparmor.vim

A syntax highlighting file for the vim text editor highlights various features of an AppArmor profile with colors. Using vim and the AppArmor syntax mode for vim, you can see the semantic implications of your profiles with color highlighting. Use vim to view and edit your profile by typing vim at a terminal window.

To enable the syntax coloring when you edit an AppArmor profile in vim, use the commands :syntax on then :set syntax=apparmor. To make sure vim recognizes the edited file type correctly as an AppArmor profile, add

# vim:ft=apparmor

at the end of the profile.

Tip

vim comes with AppArmor highlighting automatically enabled for files in /etc/apparmor.d/.

When you enable this feature, vim colors the lines of the profile for you:

Blue

Comments

White

Ordinary read access lines

Brown

Capability statements and complain flags

Yellow

Lines that grant write access

Green

Lines that grant execute permission (either ix or px)

Red

Lines that grant unconfined access (ux)

Red background

Syntax errors that will not load properly into the AppArmor modules

For further help with syntax highlighting, refer to the apparmor.vim and vim man pages, or use the :help syntax command from within the vim editor. The AppArmor syntax file is stored in /usr/share/vim/current/syntax/apparmor.vim.

24.8 Important File Names and Directories

The following list contains the most important files and directories used by the AppArmor framework. If you intend to manage and troubleshoot your profiles manually, make sure that you know about these files and directories:

/sys/kernel/security/apparmor/profiles

Virtualized file representing the currently loaded set of profiles.

/etc/apparmor/

Location of AppArmor configuration files.

/etc/apparmor/profiles/extras/

A local repository of profiles shipped with AppArmor, but not enabled by default.

/etc/apparmor.d/

Location of profiles, named with the convention of replacing the / in paths with . (not for the root /) so profiles are easier to manage. For example, the profile for the program /usr/sbin/ntpd is named usr.sbin.ntpd.

/etc/apparmor.d/abstractions/

Location of abstractions.

/etc/apparmor.d/program-chunks/

Location of program chunks.

/proc/*/attr/current

Check this file to review the confinement status of a process and the profile that is used to confine the process. The ps auxZ command retrieves this information automatically.
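
For example, to query the confinement of a single process directly (the PID and the output shown are illustrative; a confined process reports its profile and mode, while an unconfined process reports unconfined):

# cat /proc/4711/attr/current
/usr/sbin/ntpd (enforce)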

25 Profiling Your Web Applications Using ChangeHat


An AppArmor® profile represents the security policy for an individual program instance or process. It applies to an executable program, but if a portion of the program needs different access permissions than other portions, the program can change hats to use a different security context, distinctive from the access of the main program. This is known as a hat or subprofile.

ChangeHat enables programs to change to or from a hat within an AppArmor profile. It enables you to define security at a finer level than the process. This feature requires that each application be made ChangeHat-aware, meaning that it is modified to make a request to the AppArmor module to switch security domains at specific times during the application execution. One example of a ChangeHat-aware application is the Apache Web server.

A profile can have an arbitrary number of subprofiles, but there are only two levels: a subprofile cannot have further child profiles. A subprofile is written as a separate profile. Its name consists of the name of the containing profile followed by the subprofile name, separated by a ^.
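
A minimal sketch of this naming scheme (the program path and hat name are hypothetical):

/usr/bin/myserver {
  #include <abstractions/base>

  # Subprofile (hat); its full name is /usr/bin/myserver^handler
  ^handler {
    /srv/data/** r,
  }
}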

Subprofiles are either stored in the same file as the parent profile, or in a separate file. The latter case is recommended on sites with many hats—it allows the policy caching to handle changes at the per hat level. If all the hats are in the same file as the parent profile, then the parent profile and all hats must be recompiled.

An external subprofile that is going to be used as a hat must begin with the word hat or the ^ character.

The following two subprofiles cannot be used as a hat:

/foo//bar { }

or

profile /foo//bar { }

While the following two are treated as hats:

^/foo//bar { }

or

hat /foo//bar { } # this syntax is not highlighted in vim

Note that the security of hats is considerably weaker than that of full profiles. Using certain types of bugs in a program, an attacker may be able to escape from a hat into the containing profile. This is because the security of hats is determined by a secret key handled by the containing process, and the code running in the hat must not have access to the key. Thus, change_hat is most useful with application servers, where a language interpreter (such as Perl, PHP, or Java) isolates pieces of code so that they do not have direct access to the memory of the containing process.

The rest of this chapter describes using change_hat with Apache, to contain Web server components run using mod_perl and mod_php. Similar approaches can be used with any application server by providing an application module similar to the mod_apparmor described next in Section 25.1.2, “Location and Directory Directives”.

Tip: For More Information

For more information, see the change_hat man page.

25.1 Configuring Apache for mod_apparmor

AppArmor provides a mod_apparmor module (package apache2-mod_apparmor) for the Apache program (only included in SUSE Linux Enterprise Server). This module makes the Apache Web server ChangeHat-aware. Install it along with Apache.

When Apache is ChangeHat-aware, it checks for the following customized AppArmor security profiles in the order given for every URI request that it receives.

  • URI-specific hat. For example, ^www_app_name/templates/classic/images/bar_left.gif

  • DEFAULT_URI

  • HANDLING_UNTRUSTED_INPUT

Note: Apache Configuration

If you install apache2-mod_apparmor, make sure the module is enabled, and then reload Apache by executing the following command:

sudo a2enmod apparmor && sudo systemctl reload apache2

Apache is configured by placing directives in plain text configuration files. The main configuration file is usually /etc/apache2/httpd.conf. When you compile Apache, you can indicate the location of this file. Directives can be placed in any of these configuration files to alter the way Apache behaves. When you make changes to the main configuration files, you need to reload Apache with sudo systemctl reload apache2, so the changes are recognized.

25.1.1 Virtual Host Directives

<VirtualHost> and </VirtualHost> directives are used to enclose a group of directives that will apply only to a particular virtual host. For more information on Apache virtual host directives, refer to http://httpd.apache.org/docs/2.4/en/mod/core.html#virtualhost.

The ChangeHat-specific configuration keyword is AADefaultHatName. It is used similarly to AAHatName, for example, AADefaultHatName My_Funky_Default_Hat.

It allows you to specify a default hat to be used for virtual hosts and other Apache server directives, so that you can have different defaults for different virtual hosts. This can be overridden by the AAHatName directive and is checked for only if there is no matching AAHatName or hat named by the URI. If the AADefaultHatName hat does not exist, it falls back to the DEFAULT_URI hat (if that exists). If none of those match, it goes back to the parent Apache hat.

25.1.2 Location and Directory Directives

Location and directory directives specify hat names in the program configuration file so that Apache invokes the specified hat for requests matching that location or directory. For Apache, you can find documentation about the location and directory directives at http://httpd.apache.org/docs/2.4/en/sections.html.

The location directive example below specifies that, for a given location, mod_apparmor should use a specific hat:

<Location /foo/>
  AAHatName MY_HAT_NAME
</Location>

This tries to use MY_HAT_NAME for any URI beginning with /foo/ (/foo/, /foo/bar, /foo/cgi/path/blah_blah/blah, etc.).

The directory directive works similarly to the location directive, except it refers to a path in the file system as in the following example:

<Directory "/srv/www/www.example.org/docs">
  # Note lack of trailing slash
  AAHatName example.org
</Directory>

25.2 Managing ChangeHat-Aware Applications

In the previous section you learned about mod_apparmor and the way it helps you to secure a specific Web application. This section walks you through a real-life example of creating a hat for a Web application, and using AppArmor's change_hat feature to secure it. Note that this chapter focuses on AppArmor's command line tools, as YaST's AppArmor module has limited functionality.

25.2.1 With AppArmor's Command Line Tools

For illustration purposes, let us choose the Web application called Adminer (http://www.adminer.org/en/). It is a full-featured SQL database management tool written in PHP, yet consisting of a single PHP file. For Adminer to work, you need to set up an Apache Web server, PHP and its Apache module, and one of the database drivers available for PHP—MariaDB in this example. You can install the required packages with

zypper in apache2 apache2-mod_apparmor apache2-mod_php5 php5 php5-mysql

To set up the Web environment for running Adminer, follow these steps:

Procedure 25.1: Setting Up a Web Server Environment
  1. Make sure the apparmor and php5 modules are enabled for Apache. If unsure, enable them with:

    a2enmod apparmor php5

    and then restart Apache with

    sudo systemctl restart apache2
  2. Make sure MariaDB is running. If unsure, restart it with

    sudo systemctl restart mysql
  3. Download Adminer from http://www.adminer.org, copy it to /srv/www/htdocs/adminer/, and rename it to adminer.php, so that its full path is /srv/www/htdocs/adminer/adminer.php.

  4. Test Adminer in your Web browser by entering http://localhost/adminer/adminer.php in its URI address field. If you installed Adminer to a remote server, replace localhost with the real host name of the server.

    Adminer Login Page
    Figure 25.1: Adminer Login Page
    Tip

    If you encounter problems viewing the Adminer login page, try to look for help in the Apache error log /var/log/apache2/error.log. Another reason you cannot access the Web page may be that your Apache is already under AppArmor control and its AppArmor profile is too tight to permit viewing Adminer. Check it with aa-status, and if needed, set Apache temporarily in complain mode with

    aa-complain usr.sbin.httpd2-prefork

After the Web environment for Adminer is ready, you need to configure Apache's mod_apparmor, so that AppArmor can detect accesses to Adminer and change to the specific hat.

Procedure 25.2: Configuring mod_apparmor
  1. Apache has several configuration files under /etc/apache2/ and /etc/apache2/conf.d/. Choose your preferred one and open it in a text editor. In this example, the vim editor is used to create a new configuration file /etc/apache2/conf.d/apparmor.conf.

    vim /etc/apache2/conf.d/apparmor.conf
  2. Copy the following snippet into the edited file.

    <Directory /srv/www/htdocs/adminer>
      AAHatName adminer
    </Directory>

    It tells Apache to let AppArmor know about a change_hat event when the Web user accesses the directory /adminer (and any file/directory inside) in Apache's document root. Remember, we placed the adminer.php application there.

  3. Save the file, close the editor, and restart Apache with

    sudo systemctl restart apache2

Apache now knows about Adminer and about changing to a hat for it. It is time to create the related hat for Adminer in the AppArmor configuration. If you do not have an AppArmor profile for Apache yet, create one before proceeding. Remember that if your Apache's main binary is /usr/sbin/httpd2-prefork, then the related profile is named /etc/apparmor.d/usr.sbin.httpd2-prefork.

Procedure 25.3: Creating a Hat for Adminer
  1. Open (or create one if it does not exist) the file /etc/apparmor.d/usr.sbin.httpd2-prefork in a text editor. Its contents should be similar to the following:

    #include <tunables/global>
    
    /usr/sbin/httpd2-prefork {
      #include <abstractions/apache2-common>
      #include <abstractions/base>
      #include <abstractions/php5>
    
      capability kill,
      capability setgid,
      capability setuid,
    
      /etc/apache2/** r,
      /run/httpd.pid rw,
      /usr/lib{,32,64}/apache2*/** mr,
      /var/log/apache2/** rw,
    
      ^DEFAULT_URI {
        #include <abstractions/apache2-common>
        /var/log/apache2/** rw,
      }
    
      ^HANDLING_UNTRUSTED_INPUT {
        #include <abstractions/apache2-common>
        /var/log/apache2/** w,
      }
    }
  2. Before the last closing curly bracket (}), insert the following section:

    ^adminer flags=(complain) {
    }

    Note the (complain) addition after the hat name—it tells AppArmor to leave the adminer hat in complain mode. That is because we need to learn the hat profile by accessing Adminer later on.

  3. Save the file, and then reload AppArmor and Apache:

    systemctl reload apparmor apache2
  4. Check if the adminer hat really is in complain mode.

    # aa-status
    apparmor module is loaded.
    39 profiles are loaded.
    37 profiles are in enforce mode.
    [...]
       /usr/sbin/httpd2-prefork
       /usr/sbin/httpd2-prefork//DEFAULT_URI
       /usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT
    [...]
    2 profiles are in complain mode.
       /usr/bin/getopt
       /usr/sbin/httpd2-prefork//adminer
    [...]

    As we can see, the httpd2-prefork//adminer profile is loaded in complain mode.

Our last task is to find out the right set of rules for the adminer hat. That is why we set the adminer hat into complain mode—the logging facility collects useful information about the access requirements of adminer.php as we use it via the Web browser. aa-logprof then helps us with creating the hat's profile.

Procedure 25.4: Generating Rules for the adminer Hat
  1. Open Adminer in the Web browser. If you installed it locally, then the URI is http://localhost/adminer/adminer.php.

  2. Choose the database engine you want to use (MariaDB in our case), and log in to Adminer using the existing database user name and password. You do not need to specify the database name as you can do so after logging in. Perform any operations with Adminer you like—create a new database, create a new table for it, set user privileges, and so on.

  3. After the short testing of Adminer's user interface, switch back to console and examine the log for collected data.

    # aa-logprof
    Reading log entries from /var/log/messages.
    Updating AppArmor profiles in /etc/apparmor.d.
    Complain-mode changes:
    
    Profile:  /usr/sbin/httpd2-prefork^adminer
    Path:     /dev/urandom
    Mode:     r
    Severity: 3
    
      1 - #include <abstractions/apache2-common>
    [...]
     [8 - /dev/urandom]
    
    [(A)llow] / (D)eny / (G)lob / Glob w/(E)xt / (N)ew / Abo(r)t / (F)inish / (O)pts

    From the aa-logprof message, it is clear that our new adminer hat was correctly detected:

    Profile:  /usr/sbin/httpd2-prefork^adminer

    The aa-logprof command will ask you to pick the right rule for each discovered AppArmor event. Specify the one you want to use, and confirm with Allow. For more information on working with the aa-genprof and aa-logprof interface, see Section 24.7.3.8, “aa-genprof—Generating Profiles”.

    Tip

    aa-logprof usually offers several valid rules for the examined event. Some are abstractions—predefined sets of rules affecting a specific common group of targets. Sometimes it is useful to include such an abstraction instead of a direct URI rule:

     1 - #include <abstractions/php5>
     [2 - /var/lib/php5/sess_3jdmii9cacj1e3jnahbtopajl7p064ai242]

    In the example above, it is recommended to press 1 and confirm with A to allow the abstraction.

  4. After the last change, you will be asked to save the changed profile.

    The following local profiles were changed. Would you like to save them?
     [1 - /usr/sbin/httpd2-prefork]
    
     (S)ave Changes / [(V)iew Changes] / Abo(r)t

    Hit S to save the changes.

  5. Set the profile to enforce mode with aa-enforce

    aa-enforce usr.sbin.httpd2-prefork

    and check its status with aa-status

    # aa-status
    apparmor module is loaded.
    39 profiles are loaded.
    38 profiles are in enforce mode.
    [...]
       /usr/sbin/httpd2-prefork
       /usr/sbin/httpd2-prefork//DEFAULT_URI
       /usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT
       /usr/sbin/httpd2-prefork//adminer
    [...]

    As you can see, the //adminer hat jumped from complain to enforce mode.

  6. Try to run Adminer in the Web browser. If you encounter problems running it, switch it back to complain mode, repeat the steps that previously did not work well, and update the profile with aa-logprof until you are satisfied with the application's functionality.

Note: Hat and Parent Profile Relationship

The profile ^adminer is only available in the context of a process running under the parent profile usr.sbin.httpd2-prefork.

25.2.2 Adding Hats and Entries to Hats in YaST

When you use the Edit Profile dialog (for instructions, refer to Section 23.2, “Editing Profiles”) or when you add a new profile using Manually Add Profile (for instructions, refer to Section 23.1, “Manually Adding a Profile”), you are given the option of adding hats (subprofiles) to your AppArmor profiles. Add a ChangeHat subprofile from the AppArmor Profile Dialog window as in the following.

  1. From the AppArmor Profile Dialog window, click Add Entry then select Hat. The Enter Hat Name dialog opens.

  2. Enter the name of the hat to add to the AppArmor profile. The name is the URI that, when accessed, receives the permissions set in the hat.

  3. Click Create Hat. You are returned to the AppArmor Profile Dialog screen.

  4. After adding the new hat, click Done.

26 Confining Users with pam_apparmor


An AppArmor profile applies to an executable program; if a portion of the program needs different access permissions than other portions need, the program can change hats via change_hat to a different role, also known as a subprofile. The pam_apparmor PAM module allows applications to confine authenticated users into subprofiles based on group names, user names, or a default profile. To accomplish this, pam_apparmor needs to be registered as a PAM session module.

The package pam_apparmor is not installed by default; you can install it using YaST or zypper. Details about how to set up and configure pam_apparmor can be found in /usr/share/doc/packages/pam_apparmor/README after the package has been installed. For details on PAM, refer to Chapter 2, Authentication with PAM.

27 Managing Profiled Applications


After you create profiles and immunize your applications, SUSE® Linux Enterprise Desktop becomes more efficient and better protected, as long as you perform AppArmor® profile maintenance (which involves analyzing log files, refining your profiles, backing up your set of profiles, and keeping it up-to-date). You can deal with these issues before they become a problem by setting up event notification by e-mail, updating profiles from system log entries by running the aa-logprof tool, and dealing with maintenance issues.

27.1 Reacting to Security Event Rejections

When you receive a security event rejection, examine the access violation and determine if that event indicated a threat or was part of normal application behavior. Application-specific knowledge is required to make the determination. If the rejected action is part of normal application behavior, run aa-logprof at the command line.

If the rejected action is not part of normal application behavior, this access should be considered a possible intrusion attempt (that was prevented) and this notification should be passed to the person responsible for security within your organization.

27.2 Maintaining Your Security Profiles

In a production environment, you should plan on maintaining profiles for all of the deployed applications. The security policies are an integral part of your deployment. You should plan on taking steps to back up and restore security policy files, plan for software changes, and allow any needed modification of security policies that your environment dictates.

27.2.1 Backing Up Your Security Profiles

Backing up profiles might save you from having to re-profile all your programs after a disk crash. Also, if profiles are changed, you can easily restore previous settings by using the backed up files.

Back up profiles by copying the profile files to a specified directory.

  1. You should first archive the files into one file. To do this, open a terminal window and enter the following as root (a sketch for restoring such an archive follows this list):

    tar zclpf profiles.tgz /etc/apparmor.d

    The simplest method to ensure that your security policy files are regularly backed up is to include the directory /etc/apparmor.d in the list of directories that your backup system archives.

  2. You can also use scp or a file manager like Nautilus to store the files on some kind of storage media, the network, or another computer.
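
To restore the profiles from an archive created in step 1, a minimal sketch (this assumes the archive was created with the tar command above, which stores entries relative to /; inspect the archive contents before extracting):

tar ztf profiles.tgz          # list the archive contents first
tar zxpf profiles.tgz -C /    # extract, preserving permissions
systemctl reload apparmor     # reload the restored profiles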

27.2.2 Changing Your Security Profiles

Maintenance of security profiles includes changing them if you decide that your system requires more or less security for its applications. To change your profiles in AppArmor, refer to Section 23.2, “Editing Profiles”.

27.2.3 Introducing New Software into Your Environment

When you add a new application version or patch to your system, you should always update the profile to fit your needs. You have several options, depending on your company's software deployment strategy. You can deploy your patches and upgrades into a test or production environment. The following explains how to do this with each method.

If you intend to deploy a patch or upgrade in a test environment, the best method for updating your profiles is to run aa-logprof in a terminal as root. For detailed instructions, refer to Section 24.7.3.9, “aa-logprof—Scanning the System Log”.

If you intend to deploy a patch or upgrade directly into a production environment, the best method for updating your profiles is to monitor the system frequently to determine if any new rejections should be added to the profile and update as needed using aa-logprof. For detailed instructions, refer to Section 24.7.3.9, “aa-logprof—Scanning the System Log”.

28 Support


This chapter outlines maintenance-related tasks. Learn how to update AppArmor® and get a list of available man pages providing basic help for using the command line tools provided by AppArmor. Use the troubleshooting section to learn about some common problems encountered with AppArmor and their solutions. Report defects or enhancement requests for AppArmor following the instructions in this chapter.

28.1 Updating AppArmor Online

Updates for AppArmor packages are provided in the same way as any other update for SUSE Linux Enterprise Desktop. Retrieve and apply them exactly like for any other package that ships as part of SUSE Linux Enterprise Desktop.

28.2 Using the Man Pages

There are man pages available for your use. In a terminal, enter man apparmor to open the AppArmor man page. Man pages are distributed in sections numbered 1 through 8. Each section is specific to a category of documentation:

Table 28.1: Man Pages: Sections and Categories

Section   Category
1         User commands
2         System calls
3         Library functions
4         Device driver information
5         Configuration file formats
6         Games
7         High level concepts
8         Administrator commands

The section numbers are used to distinguish man pages from each other. For example, exit(2) describes the exit system call, while exit(3) describes the exit C library function.

The AppArmor man pages are:

  • aa-audit(8)

  • aa-autodep(8)

  • aa-complain(8)

  • aa-decode(8)

  • aa-disable(8)

  • aa-easyprof(8)

  • aa-enforce(8)

  • aa-exec(8)

  • aa-genprof(8)

  • aa-logprof(8)

  • aa-notify(8)

  • aa-status(8)

  • aa-unconfined(8)

  • aa_change_hat(8)

  • logprof.conf(5)

  • apparmor.d(5)

  • apparmor.vim(5)

  • apparmor(7)

  • apparmor_parser(8)

  • apparmor_status(8)

28.3 For More Information

Find more information about the AppArmor product at: http://wiki.apparmor.net. Find the product documentation for AppArmor in the installed system at /usr/share/doc/manual.

There is a mailing list for AppArmor that users can post to or join to communicate with developers. See https://lists.ubuntu.com/mailman/listinfo/apparmor for details.

28.4 Troubleshooting

This section lists the most common problems and error messages that may occur using AppArmor.

28.4.1 How to React to Odd Application Behavior?

If you notice odd application behavior or any other type of application problem, first check the reject messages in the log files to see if AppArmor is restricting your application too closely. If you detect reject messages that indicate your application or service is restricted too tightly by AppArmor, update your profile to properly handle your use case of the application. Do this with aa-logprof (Section 24.7.3.9, “aa-logprof—Scanning the System Log”).

If you decide to run your application or service without AppArmor protection, remove the application's profile from /etc/apparmor.d or move it to another location.

28.4.2 My Profiles Do not Seem to Work Anymore …

If you have been using previous versions of AppArmor and have updated your system (but kept your old set of profiles), you might notice applications that seemed to work perfectly before the update now behaving strangely or not working at all.

This version of AppArmor introduces a set of new features to the profile syntax and the AppArmor tools that might cause trouble with older versions of the AppArmor profiles. Those features are:

  • File Locking

  • Network Access Control

  • The SYS_PTRACE Capability

  • Directory Path Access

The current version of AppArmor mediates file locking and introduces a new permission mode (k) for this. Applications requesting file locking permission might misbehave or fail altogether if confined by older profiles which do not explicitly contain permissions to lock files. If you suspect this is the case, check the log file under /var/log/audit/audit.log for entries like the following:

type=AVC msg=audit(1389862802.727:13939): apparmor="DENIED" \
operation="file_lock" parent=2692 profile="/usr/bin/opera" \
name="/home/tux/.qt/.qtrc.lock" pid=28730 comm="httpd2-prefork" \
requested_mask="::k" denied_mask="::k" fsuid=30 ouid=0

Update the profile using the aa-logprof command as outlined below.
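
For illustration, the rule that resolves the denied file_lock operation from the log excerpt above could look like the following (a sketch; aa-logprof generates the exact rule for you):

/home/tux/.qt/.qtrc.lock rwk,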

The new network access control syntax based on the network family and type specification, described in Section 21.5, “Network Access Control”, might cause application misbehavior or even stop applications from working. If you notice a network-related application behaving strangely, check the log file under /var/log/audit/audit.log for entries like the following:

type=AVC msg=audit(1389864332.233:13947): apparmor="DENIED" \
operation="socket_create" family="inet" parent=29985 profile="/bin/ping" \
sock_type="raw" pid=30251 comm="ping"

This log entry means that our example application, /bin/ping in this case, failed to get AppArmor's permission to open a network connection. This permission needs to be explicitly stated to make sure that an application has network access. To update the profile to the new syntax, use the aa-logprof command as outlined below.
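
For the example log entry above, a matching rule states the network family and type explicitly (a sketch following the syntax of Section 21.5, “Network Access Control”):

network inet raw,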

The current kernel requires the SYS_PTRACE capability if a process tries to access files in /proc/PID/fd/*. New profiles need an entry for the file and the capability, where old profiles only needed the file entry. For example:

/proc/*/fd/**  rw,

in the old syntax would translate to the following rules in the new syntax:

capability SYS_PTRACE,
/proc/*/fd/**  rw,

To update the profile to the new syntax, use the YaST Update Profile Wizard or the aa-logprof command as outlined below.

With this version of AppArmor, a few changes have been made to the profile rule syntax to better distinguish directory from file access. Therefore, some rules matching both file and directory paths in the previous version might now match a file path only. This could lead to AppArmor not being able to access a crucial directory, and thus trigger misbehavior of your application and various log messages. The following examples highlight the most important changes to the path syntax.

Using the old syntax, the following rule would allow access to files and directories in /proc/net. It would allow directory access only to read the entries in the directory, but not give access to files or directories under the directory. For example, /proc/net/dir/foo: dir would be matched by the asterisk (*), but as foo is a file or directory under dir, it cannot be accessed.

/proc/net/*  r,

To get the same behavior using the new syntax, you need two rules instead of one. The first allows access to the files under /proc/net and the second allows access to directories under /proc/net. Directory access can only be used for listing the contents, not actually accessing files or directories underneath the directory.

/proc/net/*  r,
/proc/net/*/  r,

The following rule works similarly both under the old and the new syntax, and allows access to both files and directories under /proc/net (but does not allow a directory listing of /proc/net/ itself):

/proc/net/**  r,

To distinguish file access from directory access using the above expression in the new syntax, use the following two rules. The first one only allows recursive access to directories under /proc/net, while the second one explicitly allows recursive file access only.

/proc/net/**/  r,
/proc/net/**[^/]  r,

The following rule works similarly both under the old and the new syntax and allows access to both files and directories beginning with foo under /proc/net:

/proc/net/foo**  r,

To distinguish file access from directory access in the new syntax and use the ** globbing pattern, use the following two rules. The first one would have matched both files and directories in the old syntax, but only matches files in the new syntax because of the missing trailing slash. The second rule matched neither file nor directory in the old syntax, but matches directories only in the new syntax:

/proc/net/**foo  r,
/proc/net/**foo/  r,

The following rules illustrate how the use of the ? globbing pattern has changed. In the old syntax, the first rule would have matched both files and directories (four characters, where the last character could be any but a slash). In the new syntax, it matches only files (the trailing slash is missing). The second rule would match nothing in the old profile syntax, but matches directories only in the new syntax. The last rule explicitly matches a file called bar under /proc/net/foo?. Using the old syntax, this rule would have applied to both files and directories:

/proc/net/foo?  r,
/proc/net/foo?/  r,
/proc/net/foo?/bar  r,

To find and resolve issues related to syntax changes, take some time after the update to check the profiles you want to keep and proceed as follows for each application you kept the profile for:

  1. Put the application's profile into complain mode:

    aa-complain /path/to/application

    Log entries are made for any actions violating the current profile, but the profile is not enforced and the application's behavior not restricted.

  2. Run the application covering all the tasks you need this application to be able to perform.

  3. Update the profile according to the log entries made while running the application:

    aa-logprof /path/to/application
  4. Put the resulting profile back into enforce mode:

    aa-enforce /path/to/application

28.4.3 Resolving Issues with Apache

After installing additional Apache modules (like apache2-mod_apparmor) or making configuration changes to Apache, profile Apache again to find out if additional rules need to be added to the profile. If you do not profile Apache again, it could be unable to start properly or be unable to serve Web pages.

28.4.4 How to Exclude Certain Profiles from the List of Profiles Used?

Run aa-disable PROGRAMNAME to disable the profile for PROGRAMNAME. This command creates a symbolic link to the profile in /etc/apparmor.d/disable/. To reactivate the profile, delete the link, and run systemctl reload apparmor.
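
For example, to disable and later re-enable the profile for ntpd (the program and profile names are illustrative):

aa-disable /usr/sbin/ntpd

To re-enable it, remove the symbolic link and reload:

rm /etc/apparmor.d/disable/usr.sbin.ntpd
systemctl reload apparmor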

28.4.5 Can I Manage Profiles for Applications Not Installed on My System?

Managing profiles with AppArmor requires you to have access to the log of the system on which the application is running. So you do not need to run the application on your profile build host as long as you have access to the machine that runs the application. You can run the application on one system, transfer the logs (/var/log/audit/audit.log or, if audit is not installed, journalctl | grep -i apparmor > path_to_logfile) to your profile build host, and run aa-logprof -f PATH_TO_LOGFILE.
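
A sketch of this workflow (the host name and file paths are hypothetical):

# On the system running the application:
journalctl | grep -i apparmor > /tmp/apparmor-events.log
scp /tmp/apparmor-events.log buildhost:/tmp/

# On the profile build host:
aa-logprof -f /tmp/apparmor-events.log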

28.4.6 How to Spot and Fix AppArmor Syntax Errors?

Manually editing AppArmor profiles can introduce syntax errors. If you attempt to start or restart AppArmor with syntax errors in your profiles, error messages are shown. The following example shows the output of such a parser error.

localhost:~ # rcapparmor start
Loading AppArmor profiles AppArmor parser error in /etc/apparmor.d/usr.sbin.squid at line 410: syntax error, unexpected TOK_ID, expecting TOK_MODE
 Profile /etc/apparmor.d/usr.sbin.squid failed to load

Using the AppArmor YaST tools, a graphical error message indicates which profile contained the error and requests you to fix it.

To fix a syntax error, log in to a terminal window as root, open the profile, and correct the syntax. Reload the profile set with systemctl reload apparmor.
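
To check a corrected profile before loading it, you can run it through the parser without touching the kernel (this assumes the -Q/--skip-kernel-load option of apparmor_parser, which performs everything except the kernel load):

apparmor_parser -Q /etc/apparmor.d/usr.sbin.squid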

Tip: AppArmor Syntax Highlighting in vi

The editor vi on SUSE Linux Enterprise Desktop supports syntax highlighting for AppArmor profiles. Lines containing syntax errors will be displayed with a red background.

28.5 Reporting Bugs for AppArmor

The developers of AppArmor are eager to deliver products of the highest quality. Your feedback and your bug reports help us keep the quality high. Whenever you encounter a bug in AppArmor, file a bug report against this product:

  1. Use your Web browser to go to http://bugzilla.suse.com/ and click Log In.

  2. Enter the account data of your SUSE account and click Login. If you do not have a SUSE account, click Create Account and provide the required data.

  3. If your problem has already been reported, check this bug report and add extra information to it, if necessary.

  4. If your problem has not been reported yet, select New from the top navigation bar and proceed to the Enter Bug page.

  5. Select the product against which to file the bug. In your case, this would be your product's release. Click Submit.

  6. Select the product version, component (AppArmor in this case), hardware platform, and severity.

  7. Enter a brief headline describing your problem and add a more elaborate description including log files. You may create attachments to your bug report for screenshots, log files, or test cases.

  8. Click Submit after you have entered all the details to send your report to the developers.

29 AppArmor Glossary

Abstraction

See profile foundation classes below.

Apache

Apache is a freely-available Unix-based Web server. It is currently the most commonly used Web server on the Internet. Find more information about Apache at the Apache Web site at http://www.apache.org.

application fire-walling

AppArmor confines applications and limits the actions they are permitted to take. It uses privilege confinement to prevent attackers from using malicious programs on the protected server and even using trusted applications in unintended ways.

attack signature

Pattern in system or network activity that alerts of a possible virus or hacker attack. Intrusion detection systems might use attack signatures to distinguish between legitimate and potentially malicious activity.

By not relying on attack signatures, AppArmor provides "proactive" instead of "reactive" defense from attacks. This is better because there is no window of vulnerability where the attack signature must be defined for AppArmor, as there is for products relying on attack signatures.

GUI

Graphical user interface. Refers to a software front-end meant to provide an attractive and easy-to-use interface between a computer user and application. Its elements include windows, icons, buttons, cursors, and scrollbars.

globbing

File name substitution. Instead of specifying explicit file name paths, you can use helper characters * (substitutes any number of characters except special ones such as / or ?) and ? (substitutes exactly one character) to address multiple files/directories at once. ** is a special substitution that matches any file or directory below the current directory.

HIP

Host intrusion prevention. Works with the operating system kernel to block abnormal application behavior in the expectation that the abnormal behavior represents an unknown attack. Blocks malicious packets on the host at the network level before they can hurt the application they target.

mandatory access control

A means of restricting access to objects that is based on fixed security attributes assigned to users, files, and other objects. The controls are mandatory in the sense that they cannot be modified by users or their programs.

profile

An AppArmor profile completely defines what system resources an individual application can access, and with what privileges.

profile foundation classes

Profile building blocks needed for common application activities, such as DNS lookup and user authentication.

RPM

The RPM Package Manager. An open packaging system available for anyone to use. It works on Red Hat Linux, SUSE Linux Enterprise Desktop, and other Linux and Unix systems. It is capable of installing, uninstalling, verifying, querying, and updating computer software packages. See http://www.rpm.org/ for more information.

SSH

Secure Shell. A service that allows you to access your server from a remote computer and issue text commands through a secure connection.

streamlined access control

AppArmor provides streamlined access control for network services by specifying which files each program is allowed to read, write, and execute. This ensures that each program does what it is supposed to do and nothing else.

URI

Uniform Resource Identifier. The generic term for all types of names and addresses that refer to objects on the World Wide Web. A URL is one kind of URI.

URL

Uniform Resource Locator. The global address of documents and other resources on the Web.

The first part of the address indicates what protocol to use and the second part specifies the IP address or the domain name where the resource is located.

For example, when you visit http://www.suse.com, you are using the HTTP protocol, as the beginning of the URL indicates.

vulnerabilities

An aspect of a system or network that leaves it open to attack. Characteristics of computer systems that allow an individual to keep it from operating correctly or that allow unauthorized users to take control of the system. Design, administrative, or implementation weaknesses or flaws in hardware, firmware, or software. If exploited, a vulnerability could lead to an unacceptable impact in the form of unauthorized access to information or the disruption of critical processing.

Part V The Linux Audit Framework

30 Understanding Linux Audit

The Linux audit framework as shipped with this version of SUSE Linux Enterprise Desktop provides a CAPP-compliant (Controlled Access Protection Profiles) auditing system that reliably collects information about any security-relevant event. The audit records can be examined to determine whether any violation of the security policies has been committed, and by whom.

Providing an audit framework is an important requirement for a CC-CAPP/EAL (Common Criteria-Controlled Access Protection Profiles/Evaluation Assurance Level) certification. Common Criteria (CC) for Information Technology Security Information is an international standard for independent security evaluations. Common Criteria helps customers judge the security level of any IT product they intend to deploy in mission-critical setups.

Common Criteria security evaluations have two sets of evaluation requirements, functional and assurance requirements. Functional requirements describe the security attributes of the product under evaluation and are summarized under the Controlled Access Protection Profiles (CAPP). Assurance requirements are summarized under the Evaluation Assurance Level (EAL). EAL describes any activities that must take place for the evaluators to be confident that security attributes are present, effective, and implemented. Examples for activities of this kind include documenting the developers' search for security vulnerabilities, the patch process, and testing.

This guide provides a basic understanding of how audit works and how it can be set up. For more information about Common Criteria itself, refer to the Common Criteria Web site.

31 Setting Up the Linux Audit Framework

This chapter shows how to set up a simple audit scenario. Every step involved in configuring and enabling audit is explained in detail. After you have learned to set up audit, consider a real-world example scenario in Chapter 32, Introducing an Audit Rule Set.

32 Introducing an Audit Rule Set

The following example configuration illustrates how audit can be used to monitor your system. It highlights the most important items that need to be audited to cover the list of auditable events specified by Controlled Access Protection Profile (CAPP).

33 Useful Resources

There are other resources available containing valuable information about the Linux audit framework:

30 Understanding Linux Audit

Abstract

The Linux audit framework as shipped with this version of SUSE Linux Enterprise Desktop provides a CAPP-compliant (Controlled Access Protection Profiles) auditing system that reliably collects information about any security-relevant event. The audit records can be examined to determine whether any violation of the security policies has been committed, and by whom.

Providing an audit framework is an important requirement for a CC-CAPP/EAL (Common Criteria-Controlled Access Protection Profiles/Evaluation Assurance Level) certification. Common Criteria (CC) for Information Technology Security Information is an international standard for independent security evaluations. Common Criteria helps customers judge the security level of any IT product they intend to deploy in mission-critical setups.

Common Criteria security evaluations have two sets of evaluation requirements, functional and assurance requirements. Functional requirements describe the security attributes of the product under evaluation and are summarized under the Controlled Access Protection Profiles (CAPP). Assurance requirements are summarized under the Evaluation Assurance Level (EAL). EAL describes any activities that must take place for the evaluators to be confident that security attributes are present, effective, and implemented. Examples for activities of this kind include documenting the developers' search for security vulnerabilities, the patch process, and testing.

This guide provides a basic understanding of how audit works and how it can be set up. For more information about Common Criteria itself, refer to the Common Criteria Web site.

Linux audit helps make your system more secure by providing you with a means to analyze what is happening on your system in great detail. It does not, however, provide additional security itself—it does not protect your system from code malfunctions or any kind of exploits. Instead, audit is useful for tracking these issues and helps you take additional security measures, like AppArmor, to prevent them.

Audit consists of several components, each contributing crucial functionality to the overall framework. The audit kernel module intercepts the system calls and records the relevant events. The auditd daemon writes the audit reports to disk. Various command line utilities take care of displaying, querying, and archiving the audit trail.

Audit enables you to do the following:

Associate Users with Processes

Audit maps processes to the user ID that started them. This makes it possible for the administrator or security officer to exactly trace which user owns which process and is potentially doing malicious operations on the system.

Important
Important: Renaming User IDs

Audit does not handle the renaming of UIDs. Therefore avoid renaming UIDs (for example, changing tux from uid=1001 to uid=2000) and retire obsolete UIDs rather than reassigning them. Otherwise you would need to change the auditctl data (audit rules) and would have problems retrieving old data correctly.

Review the Audit Trail

Linux audit provides tools that write the audit reports to disk and translate them into human readable format.

Review Particular Audit Events

Audit provides a utility that allows you to filter the audit reports for certain events of interest. You can filter for:

  • User

  • Group

  • Audit ID

  • Remote Host Name

  • Remote Host Address

  • System Call

  • System Call Arguments

  • File

  • File Operations

  • Success or Failure

Apply a Selective Audit

Audit provides the means to filter the audit reports for events of interest and to tune audit to record only selected events. You can create your own set of rules and have the audit daemon record only those of interest to you.

Guarantee the Availability of the Report Data

Audit reports are owned by root and therefore only removable by root. Unauthorized users cannot remove the audit logs.

Prevent Audit Data Loss

If the kernel runs out of memory, the audit daemon's backlog is exceeded, or its rate limit is exceeded, audit can trigger a shutdown of the system to keep events from escaping audit's control. This shutdown would be an immediate halt of the system triggered by the audit kernel component without synchronizing the latest logs to disk. The default configuration is to log a warning to syslog rather than to halt the system.

If the system runs out of disk space when logging, the audit system can be configured to perform clean shutdown. The default configuration tells the audit daemon to stop logging when it runs out of disk space.

30.1 Introducing the Components of Linux Audit

The following figure illustrates how the various components of audit interact with each other:

Introducing the Components of Linux Audit
Figure 30.1: Introducing the Components of Linux Audit

Straight arrows represent the data flow between components while dashed arrows represent lines of control between components.

auditd

The audit daemon is responsible for writing to disk the audit messages that are generated through the audit kernel interface and triggered by application and system activity. The way the audit daemon is started is controlled by systemd. Once started, the behavior of the audit system is controlled by /etc/audit/auditd.conf. For more information about auditd and its configuration, refer to Section 30.2, “Configuring the Audit Daemon”.
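Because auditd is started through systemd, the standard systemctl commands apply; a minimal sketch:

systemctl status auditd    # check whether the daemon is running
systemctl start auditd     # start the daemon for the current session
systemctl enable auditd    # have the daemon started automatically at boot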

auditctl

The auditctl utility controls the audit system. It controls the log generation parameters and kernel settings of the audit interface and the rule sets that determine which events are tracked. For more information about auditctl, refer to Section 30.3, “Controlling the Audit System Using auditctl”.

audit rules

The file /etc/audit/audit.rules contains a sequence of auditctl commands that are loaded at system boot time immediately after the audit daemon is started. For more information about audit rules, refer to Section 30.4, “Passing Parameters to the Audit System”.

aureport

The aureport utility allows you to create custom reports from the audit event log. This report generation can easily be scripted, and the output can be used by various other applications, for example, to plot these results. For more information about aureport, refer to Section 30.5, “Understanding the Audit Logs and Generating Reports”.

ausearch

The ausearch utility can search the audit log file for certain events using various keys or other characteristics of the logged format. For more information about ausearch, refer to Section 30.6, “Querying the Audit Daemon Logs with ausearch”.

audispd

The audit dispatcher daemon (audispd) can be used to relay event notifications to other applications instead of (or in addition to) writing them to disk in the audit log. For more information about audispd, refer to Section 30.9, “Relaying Audit Event Notifications”.

autrace

The autrace utility traces individual processes in a fashion similar to strace. The output of autrace is logged to the audit log. For more information about autrace, refer to Section 30.7, “Analyzing Processes with autrace”.

aulast

Prints a list of the last logged-in users, similarly to last. aulast searches back through the audit logs (or the given audit log file) and displays a list of all users logged in and out based on the range of time in the audit logs.

aulastlog

Prints the last login for all users of a machine, similar to the way lastlog does. The login name, port, and last login time are printed.
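As a sketch of how these two helpers are typically invoked (the user name tux is a hypothetical example; the --stdin and -u options are assumptions based on the tools' standard usage):

aulast                             # list logins and logouts found in the audit logs
ausearch --raw | aulast --stdin    # feed selected raw records to aulast instead
aulastlog -u tux                   # show only the last login of the assumed user tux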

30.2 Configuring the Audit Daemon

Before you can actually start generating audit logs and processing them, configure the audit daemon itself. The /etc/audit/auditd.conf configuration file determines how the audit system functions when the daemon has been started. For most use cases, the default settings shipped with SUSE Linux Enterprise Desktop should suffice. For CAPP environments, most of these parameters need tweaking. The following list briefly introduces the parameters available:

log_file = /var/log/audit/audit.log
log_format = RAW
log_group = root
priority_boost = 4
flush = INCREMENTAL
freq = 20
num_logs = 5
disp_qos = lossy
dispatcher = /sbin/audispd
name_format = NONE
##name = mydomain
max_log_file = 6
max_log_file_action = ROTATE
space_left = 75
space_left_action = SYSLOG
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SUSPEND
##tcp_listen_port =
tcp_listen_queue = 5
tcp_max_per_addr = 1
##tcp_client_ports = 1024-65535
tcp_client_max_idle = 0

Depending on whether you want your environment to satisfy the requirements of CAPP, you need to be extra restrictive when configuring the audit daemon. Where you need to use particular settings to meet the CAPP requirements, a CAPP Environment note tells you how to adjust the configuration.

log_file, log_format and log_group

log_file specifies the location where the audit logs should be stored. log_format determines how the audit information is written to disk, and log_group defines the group that owns the log files. Possible values for log_format are raw (messages are stored exactly as the kernel sends them) or nolog (messages are discarded and not written to disk). The data sent to the audit dispatcher is not affected if you use the nolog mode. The default setting is raw and you should keep it if you want to be able to create reports and queries against the audit logs using the aureport and ausearch tools. The value for log_group can be specified either as a group name or as the group's numeric ID.

Note
Note: CAPP Environment

In a CAPP environment, have the audit log reside on its own partition. By doing so, you can be sure that the space detection of the audit daemon is accurate and that you do not have other processes consuming this space.
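A minimal sketch of a matching /etc/fstab entry; the device name and file system type below are assumptions, to be adjusted to your disk layout:

# dedicated partition for the audit trail (hypothetical device)
/dev/sdb1   /var/log/audit   ext4   defaults   0 2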

priority_boost

Determines how much of a priority boost the audit daemon should get. Possible values are 0 to 20. The resulting nice value is calculated as 0 - priority_boost, so the default of 4 makes auditd run at nice value -4.

flush and freq

Specifies whether, how, and how often the audit logs should be written to disk. Valid values for flush are none, incremental, data, and sync. none tells the audit daemon not to make any special effort to write the audit data to disk. incremental tells the audit daemon to explicitly flush the data to disk. A frequency must be specified if incremental is used. A freq value of 20 tells the audit daemon to request that the kernel flush the data to disk after every 20 records. The data option keeps the data portion of the disk file synchronized at all times while the sync option takes care of both metadata and data.

Note
Note: CAPP Environment

In a CAPP environment, make sure that the audit trail is always fully up to date and complete. Therefore, use sync or data with the flush parameter.

num_logs

Specify the number of log files to keep if you have given rotate as the max_log_file_action. Possible values range from 0 to 99. A value less than 2 means that the log files are not rotated. As you increase the number of files to rotate, you increase the amount of work required of the audit daemon. While doing this rotation, auditd cannot always service new data arriving from the kernel as quickly, which can result in a backlog condition (triggering auditd to react according to the failure flag, described in Section 30.3, “Controlling the Audit System Using auditctl”). In this situation, increasing the backlog limit is recommended. Do so by changing the value of the -b parameter in the /etc/audit/audit.rules file.

disp_qos and dispatcher

The dispatcher is started by the audit daemon during its start. The audit daemon relays the audit messages to the application specified in dispatcher. This application must be a highly trusted one, because it needs to run as root. disp_qos determines whether you allow for lossy or lossless communication between the audit daemon and the dispatcher.

If you select lossy, the audit daemon might discard some audit messages when the message queue is full. These events still get written to disk if log_format is set to raw, but they might not get through to the dispatcher. If you select lossless the audit logging to disk is blocked until there is an empty spot in the message queue. The default value is lossy.

name_format and name

name_format controls how computer names are resolved. Possible values are none (no name will be used), hostname (value returned by gethostname), fqd (fully qualified host name as received through a DNS lookup), numeric (IP address) and user. user is a custom string that needs to be defined with the name parameter.

max_log_file and max_log_file_action

max_log_file takes a numerical value that specifies the maximum file size in megabytes that the log file can reach before a configurable action is triggered. The action to be taken is specified in max_log_file_action. Possible values for max_log_file_action are ignore, syslog, suspend, rotate, and keep_logs. ignore tells the audit daemon to do nothing when the size limit is reached, syslog tells it to issue a warning and send it to syslog, and suspend causes the audit daemon to stop writing logs to disk, leaving the daemon itself still alive. rotate triggers log rotation using the num_logs setting. keep_logs also triggers log rotation, but does not use the num_logs setting, so always keeps all logs.

Note
Note: CAPP Environment

To keep a complete audit trail in CAPP environments, the keep_logs option should be used. If using a separate partition to hold your audit logs, adjust max_log_file and num_logs to use the entire space available on that partition. Note that the more files that need to be rotated, the longer it takes to get back to receiving audit events.

space_left and space_left_action

space_left takes a numerical value in megabytes of remaining disk space that triggers a configurable action by the audit daemon. The action is specified in space_left_action. Possible values for this parameter are ignore, syslog, email, exec, suspend, single, and halt. ignore tells the audit daemon to ignore the warning and do nothing, syslog has it issue a warning to syslog, and email sends an e-mail to the account specified under action_mail_acct. exec plus a path to a script executes the given script. Note that it is not possible to pass parameters to the script. suspend tells the audit daemon to stop writing to disk but remain alive while single triggers the system to be brought down to single user mode. halt triggers a full shutdown of the system.

Note
Note: CAPP Environment

Make sure that space_left is set to a value that gives the administrator enough time to react to the alert and to free enough disk space for the audit daemon to continue to work. Freeing disk space would involve calling aureport -t and archiving the oldest logs on a separate archiving partition or resource. The actual value for space_left depends on the size of your deployment. Set space_left_action to email.
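If you prefer the exec action instead, keep in mind that auditd passes no parameters to the configured program, so everything must be hard-coded. A minimal hypothetical sketch (script path and archive location are assumptions):

#!/bin/bash
# /usr/local/sbin/audit-space-warning.sh (hypothetical path, referenced from
# auditd.conf as: space_left_action = exec /usr/local/sbin/audit-space-warning.sh)
logger -p authpriv.warning "auditd: disk space for /var/log/audit is running low"
# move the oldest rotated log to an assumed archive partition
mv /var/log/audit/audit.log.5 /srv/audit-archive/ 2>/dev/null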

action_mail_acct

Specify an e-mail address or alias to which any alert messages should be sent. The default setting is root, but you can enter any local or remote account as long as e-mail and the network are properly configured on your system and /usr/lib/sendmail exists.

admin_space_left and admin_space_left_action

admin_space_left takes a numerical value in megabytes of remaining disk space. The system is already running low on disk space when this limit is reached and the administrator has one last chance to react to this alert and free disk space for the audit logs. The value of admin_space_left should be lower than the value for space_left. The possible values for admin_space_left_action are the same as for space_left_action.

Note
Note: CAPP Environment

Set admin_space_left to a value that would allow the administrator's actions to be recorded. The action should be set to single.

disk_full_action

Specify which action to take when the system runs out of disk space for the audit logs. Valid values are ignore, syslog, rotate, exec, suspend, single, and halt. For an explanation of these values, refer to space_left and space_left_action.

Note
Note: CAPP Environment

As the disk_full_action is triggered when there is absolutely no more room for any audit logs, you should bring the system down to single-user mode (single) or shut it down completely (halt).

disk_error_action

Specify which action to take when the audit daemon encounters any kind of disk error while writing the logs to disk or rotating the logs. The possible values are the same as for space_left_action.

Note
Note: CAPP Environment

Use syslog, single, or halt depending on your site's policies regarding the handling of any kind of hardware failure.

tcp_listen_port, tcp_listen_queue, tcp_client_ports, tcp_client_max_idle, and tcp_max_per_addr

The audit daemon can receive audit events from other audit daemons. The tcp parameters let you control incoming connections. Specify a port between 1 and 65535 with tcp_listen_port on which auditd listens. tcp_listen_queue lets you configure a maximum value for pending connections. Make sure not to set this value too small, since the number of pending connections may be high under certain circumstances, such as after a power outage. tcp_client_ports defines which client ports are allowed. Either specify a single port or a port range with numbers separated by a dash (for example 1-1023 for all privileged ports).

Specifying a single allowed client port may make it difficult for the client to restart its audit subsystem, as it will be unable to re-create a connection with the same host addresses and ports until the connection closure TIME_WAIT state times out. If a client stops responding, auditd complains. Specify the number of seconds after which this happens with tcp_client_max_idle. Keep in mind that this setting is valid for all clients and therefore should be higher than any individual client heartbeat setting, preferably by a factor of two. tcp_max_per_addr is a numeric value representing how many concurrent connections from one IP address are allowed.

Tip
Tip

We recommend using privileged ports for client and server to prevent non-root programs (those lacking the CAP_NET_BIND_SERVICE capability) from binding to those ports.
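Putting the above together, a server-side sketch of the relevant auditd.conf settings might look as follows; the port number is an assumption:

# listen on a privileged port, as recommended above (hypothetical port number)
tcp_listen_port = 60
tcp_listen_queue = 5
# accept connections from privileged client ports only
tcp_client_ports = 1-1023
tcp_max_per_addr = 1
# 0 disables the idle-client check
tcp_client_max_idle = 0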

When the daemon configuration in /etc/audit/auditd.conf is complete, the next step is to focus on controlling the amount of auditing the daemon does, and to assign sufficient resources and limits to the daemon so it can operate smoothly.

30.3 Controlling the Audit System Using auditctl

auditctl is responsible for controlling the status and some basic system parameters of the audit daemon. It controls the amount of auditing performed on the system. Using audit rules, auditctl controls which components of your system are subjected to the audit and to what extent they are audited. Audit rules can be passed to the audit daemon on the auditctl command line or by composing a rule set and instructing the audit daemon to process this file. By default, the auditd daemon is configured to check for audit rules under /etc/audit/audit.rules. For more details on audit rules, refer to Section 30.4, “Passing Parameters to the Audit System”.

The main auditctl commands to control basic audit system parameters are:

  • auditctl -e to enable or disable audit

  • auditctl -f to control the failure flag

  • auditctl -r to control the rate limit for audit messages

  • auditctl -b to control the backlog limit

  • auditctl -s to query the current status of the audit daemon

    Tip
    Tip

    Before running auditctl -S on your system, add -F arch=b64 to prevent the architecture mismatch warning.

The -e, -f, -r, and -b options can also be specified in the audit.rules file to avoid having to enter them each time the audit daemon is started.

Any time you query the status of the audit daemon with auditctl -s or change the status flag with auditctl -e FLAG, a status message (including information on each of the above-mentioned parameters) is printed. The following example highlights the typical audit status message.

Example 30.1: Example output of auditctl -s
AUDIT_STATUS: enabled=1 flag=2 pid=3105 rate_limit=0 backlog_limit=8192 lost=0 backlog=0
Table 30.1: Audit Status Flags

enabled
  Meaning: Set the enable flag. [0..2] 0=disable, 1=enable, 2=enable and lock down the configuration.
  Command: auditctl -e [0|1|2]

flag
  Meaning: Set the failure flag. [0..2] 0=silent, 1=printk, 2=panic (immediate halt without synchronizing pending data to disk).
  Command: auditctl -f [0|1|2]

pid
  Meaning: Process ID under which auditd is running (read-only).

rate_limit
  Meaning: Set a limit in messages per second. If the rate is not zero and is exceeded, the action specified in the failure flag is triggered.
  Command: auditctl -r RATE

backlog_limit
  Meaning: Specify the maximum number of outstanding audit buffers allowed. If all buffers are full, the action specified in the failure flag is triggered.
  Command: auditctl -b BACKLOG

lost
  Meaning: Count the current number of lost audit messages (read-only).

backlog
  Meaning: Count the current number of outstanding audit buffers (read-only).

30.4 Passing Parameters to the Audit System

Commands to control the audit system can be invoked individually from the shell using auditctl, or read in batch from a file using auditctl -R. The latter method is used by the init scripts to load rules from the file /etc/audit/audit.rules after the audit daemon has been started. The rules are executed in order from top to bottom. Each of these rules would expand to a separate auditctl command. The syntax used in the rules file is the same as that used for the auditctl command.

Changes made to the running audit system by executing auditctl on the command line are not persistent across system restarts. For changes to persist, add them to the /etc/audit/audit.rules file and, if they are not currently loaded into audit, restart the audit system to load the modified rule set by using the systemctl restart auditd command.
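For example, to persist a new watch (the watched path and key below are hypothetical), append the rule to the rules file and restart the daemon:

echo '-w /etc/sysconfig/ -p wa -k sysconfig' >> /etc/audit/audit.rules
systemctl restart auditd
auditctl -l    # verify that the new watch is now part of the loaded rule set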

Example 30.2: Example Audit Rules—Audit System Parameters
-b 1000  (1)
-f 1     (2)
-r 10    (3)
-e 1     (4)

1

Specify the maximum number of outstanding audit buffers. Depending on the level of logging activity, you might need to adjust the number of buffers to avoid causing too heavy an audit load on your system.

2

Specify the failure flag to use. See Table 30.1, “Audit Status Flags” for possible values.

3

Specify the maximum number of messages per second that may be issued by the kernel. See Table 30.1, “Audit Status Flags” for details.

4

Enable or disable the audit subsystem.

Using audit, you can track any kind of file system access to important files, configurations or resources. You can add watches on these and assign keys to each kind of watch for better identification in the logs.

Example 30.3: Example Audit Rules—File System Auditing
-w /etc/shadow  (1)
-w /etc -p rx  (2)
-w /etc/passwd -k fk_passwd -p rwxa  (3)

1

The -w option tells audit to add a watch to the file specified, in this case /etc/shadow. All system calls requesting access permissions to this file are analyzed.

2

This rule adds a watch to the /etc directory and applies permission filtering for read and execute access to this directory (-p rx). Any system call requesting either of these two permissions is analyzed. Only the creation of new files and the deletion of existing ones are logged as directory-related events. To get more specific events for files located under this particular directory, you should add a separate rule for each file. A file must exist before you add a rule containing a watch on it. Auditing files as they are created is not supported.

3

This rule adds a file watch to /etc/passwd. Permission filtering is applied for read, write, execute, and attribute change permissions. The -k option allows you to specify a key to use to filter the audit logs for this particular event later (for example with ausearch). You may use the same key on different rules to be able to group rules when searching for them. It is also possible to apply multiple keys to a rule.
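A short sketch of how such a key is used in practice, combining the watch from this example with the ausearch call mentioned above:

auditctl -w /etc/passwd -k fk_passwd -p rwxa   # load the watch from Example 30.3
less /etc/passwd                               # trigger a read event on the file
ausearch -k fk_passwd -i                       # retrieve all events carrying this key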

System call auditing lets you track your system's behavior on a level even below the application level. When designing these rules, consider that auditing a great many system calls may increase your system load and cause you to run out of disk space. Consider carefully which events need tracking and how they can be filtered to be even more specific.

Example 30.4: Example Audit Rules—System Call Auditing
-a exit,always -S mkdir  (1)
-a exit,always -S access -F a1=4  (2)
-a exit,always -S ipc -F a0=2  (3)
-a exit,always -S open -F success!=0  (4)
-a task,always -F auid=0  (5)
-a task,always -F uid=0 -F auid=501 -F gid=wheel  (6)

1

This rule activates auditing for the mkdir system call. The -a option adds system call rules. The exit,always keyword pair tells audit to evaluate the rule at system call exit and to create an event on every invocation. The -S option specifies the system call to which this rule should be applied.

2

This rule adds auditing to the access system call, but only if the second argument of the system call (mode) is 4 (R_OK). exit,always tells audit to add an audit context to this system call when entering it, and to write out a report at system call exit.

3

This rule adds an audit context to the IPC multiplexed system call. The specific ipc system call is passed as the first syscall argument and can be selected using -F a0=IPC_CALL_NUMBER.

4

This rule audits failed attempts to call open.

5

This rule is an example of a task rule (keyword: task). It is different from the other rules above in that it applies to processes that are forked or cloned. To filter this kind of event, you can only use fields that are known at fork time, such as UID, GID, and AUID. This example rule filters for all tasks carrying an audit ID of 0.

6

This last rule makes heavy use of filters. All filter options are combined with a logical AND operator, meaning that this rule applies to all tasks that carry the audit ID of 501, run as root, and have wheel as the group. A process is given an audit ID on user login. This ID is then handed down to any child process started by the initial process of the user. Even if the user changes his identity, the audit ID stays the same and allows tracing actions to the original user.

Tip
Tip: Filtering System Call Arguments

For more details on filtering system call arguments, refer to Section 32.6, “Filtering System Call Arguments”.
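For instance, combining the architecture filter from the tip in Section 30.3 with the failed-open rule from Example 30.4 and a hypothetical key gives:

auditctl -a exit,always -F arch=b64 -S open -F success!=0 -k failed_open
ausearch -k failed_open -i    # later, list all failed open calls by key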

You can not only add rules to the audit system, but also remove them. There are different methods for deleting the entire rule set at once or for deleting system call rules or file and directory watches:

Example 30.5: Deleting Audit Rules and Events
-D  (1)
-d exit,always -S mkdir  (2)
-W /etc  (3)

1

Clear the queue of audit rules and delete any preexisting rules. This rule is used as the first rule in /etc/audit/audit.rules files to make sure that the rules that are about to be added do not clash with any preexisting ones. The auditctl -D command is also used before doing an autrace to avoid having the trace rules clash with any rules present in the audit.rules file.

2

This rule deletes a system call rule. The -d option must precede any system call rule that needs to be deleted from the rule queue, and must match exactly.

3

This rule tells audit to discard the rule with the directory watch on /etc from the rules queue. This rule deletes any rule containing a directory watch on /etc, regardless of any permission filtering or key options.

To get an overview of which rules are currently in use in your audit setup, run auditctl -l. This command displays all rules with one rule per line.

Example 30.6: Listing Rules with auditctl -l
exit,always watch=/etc perm=rx
exit,always watch=/etc/passwd perm=rwxa key=fk_passwd
exit,always watch=/etc/shadow perm=rwxa
exit,always syscall=mkdir
exit,always a1=4 (0x4) syscall=access
exit,always a0=2 (0x2) syscall=ipc
exit,always success!=0 syscall=open
Note
Note: Creating Filter Rules

You can build very sophisticated audit rules by using the various filter options. Refer to the auditctl(8) man page for more information about the options available for building audit filter rules, and audit rules in general.

30.5 Understanding the Audit Logs and Generating Reports

To understand what the aureport utility does, it is vital to know how the logs generated by the audit daemon are structured, and what exactly is recorded for an event. Only then can you decide which report types are most appropriate for your needs.

30.5.1 Understanding the Audit Logs

The following examples highlight two typical events that are logged by audit and how their trails in the audit log are read. The audit log or logs (if log rotation is enabled) are stored in the /var/log/audit directory. The first example is a simple less command. The second example covers a great deal of PAM activity in the logs when a user tries to remotely log in to a machine running audit.

Example 30.7: A Simple Audit Event—Viewing the Audit Log
type=SYSCALL msg=audit(1234874638.599:5207): arch=c000003e syscall=2 success=yes exit=4 a0=62fb60 a1=8000 a2=31 a3=0 items=1 ppid=25400 pid=25616 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=1164 comm="less" exe="/usr/bin/less" key="doc_log"
type=CWD msg=audit(1234874638.599:5207):  cwd="/root"
type=PATH msg=audit(1234874638.599:5207): item=0 name="/var/log/audit/audit.log" inode=1219041 dev=08:06 mode=0100644 ouid=0 ogid=0 rdev=00:00

The above event, a simple less /var/log/audit/audit.log, wrote three messages to the log. All of them are closely linked together and you would not be able to make sense of one of them without the others. The first message reveals the following information:

type

The type of event recorded. In this case, it assigns the SYSCALL type to an event triggered by a system call. The CWD event was recorded to record the current working directory at the time of the syscall. A PATH event is generated for each path passed to the system call. The open system call takes only one path argument, so only generates one PATH event. It is important to understand that the PATH event reports the path name string argument without any further interpretation, so a relative path requires manual combination with the path reported by the CWD event to determine the object accessed.

msg

A message ID enclosed in brackets. The ID splits into two parts. All characters before the : represent a Unix epoch time stamp. The number after the colon represents the actual event ID. All events that are logged from one application's system call have the same event ID. If the application makes a second system call, it gets another event ID.
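For example, the epoch time stamp of the event above can be decoded with the date command:

date -d @1234874638
# prints: Tue Feb 17 13:43:58 2009 (exact output depends on your time zone)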

arch

References the CPU architecture of the system call. Decode this information using the -i option on any of your ausearch commands when searching the logs.

syscall

The type of system call as it would have been printed by an strace on this particular system call. This data is taken from the list of system calls under /usr/include/asm/unistd.h and may vary depending on the architecture. In this case, syscall=2 refers to the open system call (see man open(2)) invoked by the less application.

success

Whether the system call succeeded or failed.

exit

The exit value returned by the system call. For the open system call used in this example, this is the file descriptor number. This varies by system call.

a0 to a3

The first four arguments to the system call in numeric form. The values of these are system call dependent. In this example (an open system call), the following are used:

a0=62fb60 a1=8000 a2=31 a3=0

a0 is the start address of the passed path name. a1 is the flags. 8000 in hex notation translates to 100000 in octal notation, which in turn translates to O_LARGEFILE. a2 is the mode, which, because O_CREAT was not specified, is unused. a3 is not passed by the open system call. Check the manual page of the relevant system call to find out which arguments are used with it.
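The hex-to-octal step can be reproduced on the shell, for example:

printf '%o\n' 0x8000
# prints: 100000 (the octal O_LARGEFILE value referenced above)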

items

The number of path items passed to the system call; each of them produces an auxiliary PATH record (one in this example).

ppid

The process ID of the parent of the process analyzed.

pid

The process ID of the process analyzed.

auid

The audit ID. A process is given an audit ID on user login. This ID is then handed down to any child process started by the initial process of the user. Even if the user changes his identity (for example, becomes root), the audit ID stays the same. Thus you can always trace actions to the original user who logged in.

uid

The user ID of the user who started the process. In this case, 0 for root.

gid

The group ID of the user who started the process. In this case, 0 for root.

euid, suid, fsuid

Effective user ID, set user ID, and file system user ID of the user that started the process.

egid, sgid, fsgid

Effective group ID, set group ID, and file system group ID of the user that started the process.

tty

The terminal from which the application was started. In this case, a pseudo-terminal used in an SSH session.

ses

The login session ID. This process attribute is set when a user logs in and can tie any process to a particular user login.

comm

The application name under which it appears in the task list.

exe

The resolved path name to the binary program.

subj

auditd records whether the process is subject to any security context, such as AppArmor. unconstrained, as in this case, means that the process is not confined with AppArmor. If the process had been confined, the binary path name plus the AppArmor profile mode would have been logged.

key

If you are auditing many directories or files, assign key strings to each of these watches. You can use these keys with ausearch to search the logs for events of this type only.

The second message triggered by the example less call does not reveal anything apart from the current working directory when the less command was executed.

The third message reveals the following (the type and message flags have already been introduced):

item

In this example, item references the a0 argument—a path—that is associated with the original SYSCALL message. Had the original call had more than one path argument (such as a cp or mv command), an additional PATH event would have been logged for the second path argument.

name

Refers to the path name passed as an argument to the open system call.

inode

Refers to the inode number corresponding to name.

dev

Specifies the device on which the file is stored, as a major and minor number pair. In this case, 08:06 stands for /dev/sda6, the sixth partition on the first SCSI or SATA disk.

mode

Numerical representation of the file's access permissions. In this case, 0644: root has read and write permissions, the owning group (root) has read access, and all other users may read the file as well.
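You can cross-check such a mode against the file itself, for example:

stat -c '%a %U %G' /var/log/audit/audit.log
# for the file in the example above, this would print: 644 root root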

ouid and ogid

Refer to the UID and GID of the inode itself.

rdev

Not applicable for this example. The rdev entry only applies to block or character devices, not to files.

Example 30.8, “An Advanced Audit Event—Login via SSH” highlights the audit events triggered by an incoming SSH connection. Most of the messages are related to the PAM stack and reflect the different stages of the SSH PAM process. Several of the audit messages carry nested PAM messages in them that signify that a particular stage of the PAM process has been reached. Although the PAM messages are logged by audit, audit assigns its own message type to each event:

Example 30.8: An Advanced Audit Event—Login via SSH
type=USER_AUTH msg=audit(1234877011.791:7731): user pid=26127 uid=0  (1)
auid=4294967295 ses=4294967295 msg='op=PAM:authentication acct="root" exe="/usr/sbin/sshd"
(hostname=jupiter.example.com, addr=192.168.2.100, terminal=ssh res=success)'
type=USER_ACCT msg=audit(1234877011.795:7732): user pid=26127 uid=0  (2)
auid=4294967295 ses=4294967295 msg='op=PAM:accounting acct="root" exe="/usr/sbin/sshd"
(hostname=jupiter.example.com, addr=192.168.2.100, terminal=ssh res=success)'
type=CRED_ACQ msg=audit(1234877011.799:7733): user pid=26125 uid=0  (3)
auid=4294967295 ses=4294967295 msg='op=PAM:setcred acct="root" exe="/usr/sbin/sshd"
(hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
type=LOGIN msg=audit(1234877011.799:7734): login pid=26125 uid=0
old auid=4294967295 new auid=0 old ses=4294967295 new ses=1172
type=USER_START msg=audit(1234877011.799:7735): user pid=26125 uid=0  (4)
auid=0 ses=1172 msg='op=PAM:session_open acct="root" exe="/usr/sbin/sshd"
(hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
type=USER_LOGIN msg=audit(1234877011.823:7736): user pid=26128 uid=0  (5)
auid=0 ses=1172 msg='uid=0: exe="/usr/sbin/sshd"
(hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
type=CRED_REFR msg=audit(1234877011.828:7737): user pid=26128 uid=0  (6)
auid=0 ses=1172 msg='op=PAM:setcred acct="root" exe="/usr/sbin/sshd"
(hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'

1

PAM reports that it has successfully requested user authentication for root from a remote host (jupiter.example.com, 192.168.2.100). The terminal where this is happening is ssh.

2

PAM reports that it has successfully determined whether the user is authorized to log in.

3

PAM reports that the appropriate credentials to log in have been acquired and that the terminal changed to a normal terminal (/dev/pts/0).

4

PAM reports that it has successfully opened a session for root.

5

The user has successfully logged in. This event is the one used by aureport -l to report about user logins.

6

PAM reports that the credentials have been successfully reacquired.

30.5.2 Generating Custom Audit Reports

The raw audit reports stored in the /var/log/audit directory tend to become very bulky and hard to understand. To more easily find relevant messages, use the aureport utility and create custom reports.

The following use cases highlight a few of the possible report types that you can generate with aureport:

Read Audit Logs from Another File

When the audit logs have moved to another machine, or when you want to analyze the logs of several machines on your local machine without connecting to each of them individually, move the logs to a local file and have aureport analyze them locally:

aureport -if myfile

Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 14:52:27.971
Selected time for report: 03/02/09 14:13:38 - 17/02/09 14:52:27.971
Number of changes in configuration: 13
Number of changes to accounts, groups, or roles: 0
Number of logins: 6
Number of failed logins: 13
Number of authentications: 7
Number of failed authentications: 573
Number of users: 1
Number of terminals: 9
Number of host names: 4
Number of executables: 17
Number of files: 279
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 994
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 1211
Number of events: 5320

The above command, aureport with no report-type option, provides only the standard general summary report generated from the logs contained in myfile. To create more detailed reports, combine the -if option with any of the options below. For example, generate a login report that is limited to a certain time frame:

aureport -l -ts 14:00 -te 15:00 -if myfile

Login Report
============================================
# date time auid host term exe success event
============================================
1. 17/02/09 14:21:09 root: 192.168.2.100 sshd /usr/sbin/sshd no 7718
2. 17/02/09 14:21:15 0 jupiter /dev/pts/3 /usr/sbin/sshd yes 7724
Convert Numeric Entities to Text

Some information, such as user IDs, is printed in numeric form. To convert it into a human-readable text format, add the -i option to your aureport command.

Create a Rough Summary Report

If you are interested in the current audit statistics (events, logins, processes, etc.), run aureport without any other option.

Create a Summary Report of Failed Events

If you want to break down the overall statistics of plain aureport to the statistics of failed events, use aureport --failed:

aureport --failed

Failed Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 14:57:35.183
Selected time for report: 03/02/09 14:13:38 - 17/02/09 14:57:35.183
Number of changes in configuration: 0
Number of changes to accounts, groups, or roles: 0
Number of logins: 0
Number of failed logins: 13
Number of authentications: 0
Number of failed authentications: 574
Number of users: 1
Number of terminals: 5
Number of host names: 4
Number of executables: 11
Number of files: 77
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 994
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 708
Number of events: 1583
Create a Summary Report of Successful Events

If you want to break down the overall statistics of a plain aureport to the statistics of successful events, use aureport --success:

aureport --success

Success Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 15:00:01.535
Selected time for report: 03/02/09 14:13:38 - 17/02/09 15:00:01.535
Number of changes in configuration: 13
Number of changes to accounts, groups, or roles: 0
Number of logins: 6
Number of failed logins: 0
Number of authentications: 7
Number of failed authentications: 0
Number of users: 1
Number of terminals: 7
Number of host names: 3
Number of executables: 16
Number of files: 215
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 0
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 558
Number of events: 3739
Create Summary Reports

In addition to the dedicated summary reports (main summary and failed and success summary), use the --summary option with most of the other options to create summary reports for a particular area of interest only. Not all reports support this option, however. This example creates a summary report for user login events:

aureport -u -i --summary

User Summary Report
===========================
total  auid
===========================
5640  root
13  tux
3  wilber
Create a Report of Events

To get an overview of the events logged by audit, use the aureport -e command. This command generates a numbered list of all events including date, time, event number, event type, and audit ID.

aureport -e -ts 14:00 -te 14:21

Event Report
===================================
# date time event type auid success
===================================
1. 17/02/09 14:20:27 7462 DAEMON_START 0 yes
2. 17/02/09 14:20:27 7715 CONFIG_CHANGE 0 yes
3. 17/02/09 14:20:57 7716 USER_END 0 yes
4. 17/02/09 14:20:57 7717 CRED_DISP 0 yes
5. 17/02/09 14:21:09 7718 USER_LOGIN -1 no
6. 17/02/09 14:21:15 7719 USER_AUTH -1 yes
7. 17/02/09 14:21:15 7720 USER_ACCT -1 yes
8. 17/02/09 14:21:15 7721 CRED_ACQ -1 yes
9. 17/02/09 14:21:15 7722 LOGIN 0 yes
10. 17/02/09 14:21:15 7723 USER_START 0 yes
11. 17/02/09 14:21:15 7724 USER_LOGIN 0 yes
12. 17/02/09 14:21:15 7725 CRED_REFR 0 yes
Create a Report from All Process Events

To analyze the log from a process's point of view, use the aureport -p command. This command generates a numbered list of all process events including date, time, process ID, name of the executable, system call, audit ID, and event number.

aureport -p

Process ID Report
======================================
# date time pid exe syscall auid event
======================================
1. 13/02/09 15:30:01 32742 /usr/sbin/cron 0 0 35
2. 13/02/09 15:30:01 32742 /usr/sbin/cron 0 0 36
3. 13/02/09 15:38:34 32734 /usr/lib/gdm/gdm-session-worker 0 -1 37
Create a Report from All System Call Events

To analyze the audit log from a system call's point of view, use the aureport -s command. This command generates a numbered list of all system call events including date, time, number of the system call, process ID, name of the command that used this call, audit ID, and event number.

aureport -s

Syscall Report
=======================================
# date time syscall pid comm auid event
=======================================
1. 16/02/09 17:45:01 2 20343 cron -1 2279
2. 16/02/09 17:45:02 83 20350 mktemp 0 2284
3. 16/02/09 17:45:02 83 20351 mkdir 0 2285
Create a Report from All Executable Events

To analyze the audit log from an executable's point of view, use the aureport -x command. This command generates a numbered list of all executable events including date, time, name of the executable, the terminal it is run in, the host executing it, the audit ID, and event number.

aureport -x

Executable Report
====================================
# date time exe term host auid event
====================================
1. 13/02/09 15:08:26 /usr/sbin/sshd sshd 192.168.2.100 -1 12
2. 13/02/09 15:08:28 /usr/lib/gdm/gdm-session-worker :0 ? -1 13
3. 13/02/09 15:08:28 /usr/sbin/sshd ssh 192.168.2.100 -1 14
Create a Report about Files

To generate a report from the audit log that focuses on file access, use the aureport -f command. This command generates a numbered list of all file-related events including date, time, name of the accessed file, number of the system call accessing it, success or failure of the command, the executable accessing the file, audit ID, and event number.

aureport -f

File Report
===============================================
# date time file syscall success exe auid event
===============================================
1. 16/02/09 17:45:01 /etc/shadow 2 yes /usr/sbin/cron -1 2279
2. 16/02/09 17:45:02 /tmp/ 83 yes /bin/mktemp 0 2284
3. 16/02/09 17:45:02 /var 83 no /bin/mkdir 0 2285
Create a Report about Users

To generate a report from the audit log that illustrates which users are running what executables on your system, use the aureport -u command. This command generates a numbered list of all user-related events including date, time, audit ID, terminal used, host, name of the executable, and an event ID.

aureport -u

User ID Report
====================================
# date time auid term host exe event
====================================
1. 13/02/09 15:08:26 -1 sshd 192.168.2.100 /usr/sbin/sshd 12
2. 13/02/09 15:08:28 -1 :0 ? /usr/lib/gdm/gdm-session-worker 13
3. 14/02/09 08:25:39 -1 ssh 192.168.2.101 /usr/sbin/sshd 14
Create a Report about Logins

To create a report that focuses on login attempts to your machine, run the aureport -l command. This command generates a numbered list of all login-related events including date, time, audit ID, host and terminal used, name of the executable, success or failure of the attempt, and an event ID.

aureport -l -i

Login Report
============================================
# date time auid host term exe success event
============================================
1. 13/02/09 15:08:31 tux: 192.168.2.100 sshd /usr/sbin/sshd no 19
2. 16/02/09 12:39:05 root: 192.168.2.101 sshd /usr/sbin/sshd no 2108
3. 17/02/09 15:29:07 geeko: ? tty3 /bin/login yes 7809
Limit a Report to a Certain Time Frame

To analyze the logs for a particular time frame, such as only the working hours of Feb 16, 2009, first find out whether this data is contained in the current audit.log or whether the logs have been rotated away, by running aureport -t:

aureport -t

Log Time Range Report
=====================
/var/log/audit/audit.log: 03/02/09 14:13:38.225 - 17/02/09 15:30:01.636

In this example, the current audit.log contains all the desired data. Otherwise, use the -if option to point the aureport commands to the log file that contains the needed data.

Then, specify the start date and time and the end date and time of the desired time frame and combine it with the report option needed. This example focuses on login attempts:

aureport -ts 02/16/09 8:00 -te 02/16/09 18:00 -l

Login Report
============================================
# date time auid host term exe success event
============================================
1. 16/02/09 12:39:05 root: 192.168.2.100 sshd /usr/sbin/sshd no 2108
2. 16/02/09 12:39:12 0 192.168.2.100 /dev/pts/1 /usr/sbin/sshd yes 2114
3. 16/02/09 13:09:28 root: 192.168.2.100 sshd /usr/sbin/sshd no 2131
4. 16/02/09 13:09:32 root: 192.168.2.100 sshd /usr/sbin/sshd no 2133
5. 16/02/09 13:09:37 0 192.168.2.100 /dev/pts/2 /usr/sbin/sshd yes 2139

The start date and time are specified with the -ts option. Any event that has a time stamp equal to or after your given start time appears in the report. If you omit the date, aureport assumes that you meant today. If you omit the time, it assumes that the start time should be midnight of the date specified.

Specify the end date and time with the -te option. Any event that has a time stamp equal to or before your given end time appears in the report. If you omit the date, aureport assumes that you meant today. If you omit the time, it assumes that the end time should be now. Use the same format for the date and time as for -ts.

All reports except the summary ones are printed in column format and sent to STDOUT, which means that this data can easily be piped to other commands. The visualization scripts introduced in Section 30.8, “Visualizing Audit Data” are examples of how to further process the data generated by audit.
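As a quick sketch of this kind of post-processing, the following pipeline counts which executables appear most often in the executable report (the awk/sort pattern mirrors the visualization scripts shown in Section 30.8):

LC_ALL=C aureport -x | awk '/^[0-9]/ { print $4 }' | sort | uniq -c | sort -rn | head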

30.6 Querying the Audit Daemon Logs with ausearch

The aureport tool helps you to create overall summaries of what is happening on the system, but if you are interested in the details of a particular event, ausearch is the tool to use.

ausearch allows you to search the audit logs using special keys and search phrases that relate to most of the flags that appear in event messages in /var/log/audit/audit.log. Not all record types contain the same search phrases. There are no hostname or uid entries in a PATH record, for example.

When searching, make sure that you choose appropriate search criteria to catch all records you need. On the other hand, you could be searching for a specific type of record and still get various other related records along with it. This is caused by different parts of the kernel contributing additional records for events that are related to the one you are looking for. For example, you would always get a PATH record along with the SYSCALL record for an open system call.

Tip
Tip: Using Multiple Search Options

Any of the command line options can be combined with logical AND operators to narrow down your search.
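For example, to combine a message type, a user ID, and a time frame in one query (the today keyword for -ts is an assumption based on standard ausearch versions):

ausearch -m SYSCALL -ua 0 -ts today -i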

Read Audit Logs from Another File

When the audit logs have moved to another machine, or when you want to analyze the logs of several machines on your local machine without connecting to each of them individually, move the logs to a local file and have ausearch search them locally:

ausearch OPTION -if myfile
Convert Numeric Results into Text

Some information, such as user IDs, is printed in numeric form. To convert it into a human-readable text format, add the -i option to your ausearch command.

Search by Audit Event ID

If you have previously run an audit report or performed an autrace, you may want to analyze the trail of a particular event in the log. Most of the report types described in Section 30.5, “Understanding the Audit Logs and Generating Reports” include audit event IDs in their output. An audit event ID is the second part of an audit message ID, which consists of a Unix epoch time stamp and the audit event ID separated by a colon. All events that are logged from one application's system call have the same event ID. Use this event ID with ausearch to retrieve this event's trail from the log.

Use a command similar to the following:

ausearch -a 5207
----
time->Tue Feb 17 13:43:58 2009
type=PATH msg=audit(1234874638.599:5207): item=0 name="/var/log/audit/audit.log" inode=1219041 dev=08:06 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1234874638.599:5207):  cwd="/root"
type=SYSCALL msg=audit(1234874638.599:5207): arch=c000003e syscall=2 success=yes exit=4 a0=62fb60 a1=8000 a2=31 a3=0 items=1 ppid=25400 pid=25616 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=1164 comm="less" exe="/usr/bin/less" key="doc_log"

The ausearch -a command grabs all records in the logs that are related to the audit event ID provided and displays them. This option can be combined with any other option.

Search by Message Type

To search for audit records of a particular message type, use the ausearch -m MESSAGE_TYPE command. Examples of valid message types include PATH, SYSCALL, and USER_LOGIN. Running ausearch -m without a message type displays a list of all message types.

Search by Login ID

To view records associated with a particular login user ID, use the ausearch -ul command. It displays any records related to the specified user login ID, provided that user had been able to log in successfully.

Search by User ID

View records related to any of the user IDs (both user ID and effective user ID) with ausearch -ua. View reports related to a particular user ID with ausearch -ui UID. To search for records related to a particular effective user ID, use ausearch -ue EUID. Searching for a user ID means matching the user ID of the user who created the process. Searching for an effective user ID means matching the user ID and privileges under which the process runs.

Search by Group ID

View records related to any of the group IDs (both group ID and effective group ID) with the ausearch -ga command. View reports related to a particular group ID with ausearch -gi GID. To search for records related to a particular effective group ID, use ausearch -ge EGID.

Search by Command Line Name

View records related to a certain command with the ausearch -c COMM_NAME command, for example, ausearch -c less for all records related to the less command.

Search by Executable Name

View records related to a certain executable with the ausearch -x EXE command, for example ausearch -x /usr/bin/less for all records related to the /usr/bin/less executable.

Search by System Call Name

View records related to a certain system call with the ausearch -sc SYSCALL command, for example, ausearch -sc open for all records related to the open system call.

Search by Process ID

View records related to a certain process ID with the ausearch -p PID command, for example ausearch -p 13368 for all records related to this process ID.

Search by Event or System Call Success Value

View records containing a certain system call success value with ausearch -sv SUCCESS_VALUE, for example, ausearch -sv yes for all successful system calls.

Search by File Name

View records containing a certain file name with ausearch -f FILE_NAME, for example, ausearch -f /foo/bar for all records related to the /foo/bar file. Using the file name alone would work as well, but using relative paths does not work.

Search by Terminal

View records of events related to a certain terminal only with ausearch -tm TERM, for example, ausearch -tm ssh to view all records related to events on the SSH terminal and ausearch -tm tty to view all events related to the console.

Search by Host Name

View records related to a certain remote host name with ausearch -hn HOSTNAME, for example, ausearch -hn jupiter.example.com. You can use a host name, fully qualified domain name, or numeric network address.

Search by Key Field

View records that contain a certain key assigned in the audit rule set to identify events of a particular type. Use ausearch -k KEY_FIELD, for example, ausearch -k CFG_etc to display any records containing the CFG_etc key.

Search by Word

View records that contain a certain string assigned in the audit rule set to identify events of a particular type. The whole string will be matched on file name, host name, and terminal. Use ausearch -w WORD.

Limit a Search to a Certain Time Frame

Use -ts and -te to limit the scope of your searches to a certain time frame. The -ts option is used to specify the start date and time and the -te option is used to specify the end date and time. These options can be combined with any of the above. The use of these options is similar to use with aureport.

30.7 Analyzing Processes with autrace

In addition to monitoring your system using the rules you set up, you can also perform dedicated audits of individual processes using the autrace command. autrace works similarly to the strace command, but gathers slightly different information. The output of autrace is written to /var/log/audit/audit.log and does not look any different from the standard audit log entries.

When performing an autrace on a process, make sure that any audit rules are purged from the queue to avoid these rules clashing with the ones autrace adds itself. Delete the audit rules with the auditctl -D command. This stops all normal auditing.

auditctl -D

No rules

autrace /usr/bin/less

Waiting to execute: /usr/bin/less
Cleaning up...
No rules
Trace complete. You can locate the records with 'ausearch -i -p 7642'

Always use the full path to the executable to track with autrace. After the trace is complete, autrace provides the event ID of the trace, so you can analyze the entire data trail with ausearch. To restore the audit system to use the audit rule set again, restart the audit daemon with systemctl restart auditd.

30.8 Visualizing Audit Data

Neither the data trail in /var/log/audit/audit.log nor the different report types generated by aureport, described in Section 30.5.2, “Generating Custom Audit Reports”, provide an intuitive reading experience to the user. The aureport output is formatted in columns and thus easily available to any sed, Perl, or awk scripts that users might connect to the audit framework to visualize the audit data.

The visualization scripts (see Section 31.6, “Configuring Log Visualization”) are one example of how to use standard Linux tools available with SUSE Linux Enterprise Desktop or any other Linux distribution to create easy-to-read audit output. The following examples help you understand how the plain audit reports can be transformed into human readable graphics.

The first example illustrates the relationship of programs and system calls. To get to this kind of data, you need to determine the appropriate aureport command that delivers the source data from which to generate the final graphic:

aureport -s -i

Syscall Report
=======================================
# date time syscall pid comm auid event
=======================================
1. 16/02/09 17:45:01 open 20343 cron unset 2279
2. 16/02/09 17:45:02 mkdir 20350 mktemp root 2284
3. 16/02/09 17:45:02 mkdir 20351 mkdir root 2285
...

The first thing that the visualization script needs to do on this report is to extract only those columns that are of interest, in this example, the syscall and the comm columns. The output is sorted, duplicates are removed, and the result is then piped into the visualization program itself:

LC_ALL=C aureport -s -i | awk '/^[0-9]/ { print $6" "$4 }' | sort | uniq | mkgraph
Figure 30.2: Flow Graph—Program versus System Call Relationship

The second example illustrates the different types of events and how many of each type have been logged. The appropriate aureport command to extract this kind of information is aureport -e:

aureport -e -i --summary

Event Summary Report
======================
total  type
======================
2434  SYSCALL
816  USER_START
816  USER_ACCT
814  CRED_ACQ
810  LOGIN
806  CRED_DISP
779  USER_END
99  CONFIG_CHANGE
52  USER_LOGIN

Because this type of report already contains two-column output, it only needs to be fed into the visualization script and transformed into a bar chart.

aureport -e -i --summary  | mkbar events
Figure 30.3: Bar Chart—Common Event Types

For background information about the visualization of audit data, refer to the Web site of the audit project at http://people.redhat.com/sgrubb/audit/visualize/index.html.

30.9 Relaying Audit Event Notifications

The auditing system also allows external applications to access and use the auditd daemon in real time. This feature is provided by the so-called audit dispatcher, which allows, for example, intrusion detection systems to use auditd to receive enhanced detection information.

audispd is a daemon which controls the audit dispatcher. It is normally started by auditd. audispd takes audit events and distributes them to the programs which want to analyze them in real time. The configuration of audispd is stored in /etc/audisp/audispd.conf. The file has the following options:

q_depth

Specifies the size of the event dispatcher internal queue. If syslog complains about audit events getting dropped, increase this value. Default is 80.

overflow_action

Specifies the way the audit daemon will react to the internal queue overflow. Possible values are ignore (nothing happens), syslog (issues a warning to syslog), suspend (audispd will stop processing events), single (the computer system will be put in single user mode), or halt (shuts the system down).

priority_boost

Specifies the priority boost for the audit event dispatcher (in addition to the audit daemon priority itself). Default is 4; a value of 0 means no change in priority.

name_format

Specifies the way the computer node name is inserted into the audit event. Possible values are none (no computer name is inserted), hostname (name returned by the gethostname system call), fqd (fully qualified domain name of the machine), numeric (IP address of the machine), or user (user defined string from the name option). Default is none.

name

Specifies a user defined string which identifies the machine. The name_format option must be set to user, otherwise this option is ignored.

max_restarts

A non-negative number that tells the audit event dispatcher how many times it can try to restart a crashed plug-in. The default is 10.

Example 30.9: Example /etc/audisp/audispd.conf
  q_depth = 80
  overflow_action = SYSLOG
  priority_boost = 4
  name_format = HOSTNAME
  #name = mydomain

The plug-in programs install their configuration files in a special directory dedicated to audispd plug-ins. It is /etc/audisp/plugins.d by default. The plug-in configuration files have the following options:

active

Specifies if the program will use audispd. Possible values are yes or no.

direction

Specifies the way the plug-in was designed to communicate with audit. It informs the event dispatcher in which directions the events flow. Possible values are in or out.

path

Specifies the absolute path to the plug-in executable. In case of internal plug-ins, this option specifies the plug-in name.

type

Specifies the way the plug-in is to be run. Possible values are builtin or always. Use builtin for internal plug-ins (af_unix and syslog) and always for most (if not all) other plug-ins. Default is always.

args

Specifies the argument that is passed to the plug-in program. Normally, plug-in programs read their arguments from their configuration file and do not need to receive any arguments. There is a limit of 2 arguments.

format

Specifies the format of data that the audit dispatcher passes to the plug-in program. Valid options are binary or string. binary passes the data exactly as the event dispatcher receives them from the audit daemon. string instructs the dispatcher to change the event into a string that is parsable by the audit parsing library. Default is string.

Example 30.10: Example /etc/audisp/plugins.d/syslog.conf
  active = no
  direction = out
  path = builtin_syslog
  type = builtin
  args = LOG_INFO
  format = string

31 Setting Up the Linux Audit Framework

This chapter shows how to set up a simple audit scenario. Every step involved in configuring and enabling audit is explained in detail. After you have learned to set up audit, consider a real-world example scenario in Chapter 32, Introducing an Audit Rule Set.

To set up audit on SUSE Linux Enterprise Desktop, you need to complete the following steps:

Procedure 31.1: Setting Up the Linux Audit Framework
  1. Make sure that all required packages are installed: audit, audit-libs, and optionally audit-libs-python. To use the log visualization as described in Section 31.6, “Configuring Log Visualization”, install gnuplot and graphviz from the SUSE Linux Enterprise Desktop media.

  2. Determine the components to audit. Refer to Section 31.1, “Determining the Components to Audit” for details.

  3. Check or modify the basic audit daemon configuration. Refer to Section 31.2, “Configuring the Audit Daemon” for details.

  4. Enable auditing for system calls. Refer to Section 31.3, “Enabling Audit for System Calls” for details.

  5. Compose audit rules to suit your scenario. Refer to Section 31.4, “Setting Up Audit Rules” for details.

  6. Generate logs and configure tailor-made reports. Refer to Section 31.5, “Configuring Audit Reports” for details.

  7. Configure optional log visualization. Refer to Section 31.6, “Configuring Log Visualization” for details.

Important
Important: Controlling the Audit Daemon

Before configuring any of the components of the audit system, make sure that the audit daemon is not running by entering systemctl status auditd as root. On a default SUSE Linux Enterprise Desktop system, audit is started on boot, so you need to turn it off by entering systemctl stop auditd. Start the daemon after configuring it with systemctl start auditd.
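
Expressed as commands, the sequence described above is:

systemctl status auditd   # check whether the daemon is running
systemctl stop auditd     # stop it before changing the configuration
                          # ...configure the audit system...
systemctl start auditd    # start the daemon again when done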

31.1 Determining the Components to Audit

Before starting to create your own audit configuration, determine to what degree you want to use it. Check the following general rules to determine which use case best applies to you and your requirements:

31.2 Configuring the Audit Daemon

The basic setup of the audit daemon is done by editing /etc/audit/auditd.conf. You may also use YaST to configure the basic settings by calling YaST › Security and Users › Linux Audit Framework (LAF). Use the tabs Log File and Disk Space for configuration. The default configuration looks like the following:

log_file = /var/log/audit/audit.log
log_format = RAW
log_group = root
priority_boost = 4
flush = INCREMENTAL
freq = 20
num_logs = 5
disp_qos = lossy
dispatcher = /sbin/audispd
name_format = NONE
##name = mydomain
max_log_file = 6
max_log_file_action = ROTATE
space_left = 75
space_left_action = SYSLOG
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SUSPEND
##tcp_listen_port =
tcp_listen_queue = 5
tcp_max_per_addr = 1
##tcp_client_ports = 1024-65535
tcp_client_max_idle = 0

The default settings work reasonably well for many setups. Some values, such as num_logs, max_log_file, space_left, and admin_space_left, depend on the size of your deployment. If disk space is limited, you might want to reduce the number of rotated log files to keep and set an earlier warning threshold for low disk space. For a CAPP-compliant setup, adjust the values for log_file, flush, max_log_file, max_log_file_action, space_left, space_left_action, admin_space_left, admin_space_left_action, disk_full_action, and disk_error_action, as described in Section 30.2, “Configuring the Audit Daemon”. An example CAPP-compliant configuration looks like this:

log_file = PATH_TO_SEPARATE_PARTITION/audit.log
log_format = RAW
priority_boost = 4
flush = SYNC                       ### or DATA
freq = 20
num_logs = 4
dispatcher = /sbin/audispd
disp_qos = lossy
max_log_file = 5
max_log_file_action = KEEP_LOGS
space_left = 75
space_left_action = EMAIL
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SINGLE   ### or HALT
disk_full_action = SUSPEND         ### or HALT
disk_error_action = SUSPEND        ### or HALT

The ### precedes comments where you can choose from several options. Do not add the comments to your actual configuration files.

Tip
Tip: For More Information

Refer to Section 30.2, “Configuring the Audit Daemon” for detailed background information about the auditd.conf configuration parameters.

31.3 Enabling Audit for System Calls

If the audit framework is not installed, install the audit package. A standard SUSE Linux Enterprise Desktop system does not have auditd running by default. Enable it with:

systemctl enable auditd

There are different levels of auditing activity available:

Basic Logging

Out of the box (without any further configuration) auditd logs only events concerning its own configuration changes to /var/log/audit/audit.log. No events (file access, system call, etc.) are generated by the kernel audit component until requested by auditctl. However, other kernel components and modules may log audit events outside of the control of auditctl and these appear in the audit log. By default, the only module that generates audit events is AppArmor.
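
To inspect such events, you can query the log by message type; for example, AppArmor events are logged with the mandatory access control (AVC) message type (a minimal sketch; the exact message types in your log may vary with the audit and AppArmor versions):

ausearch -m AVC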

Advanced Logging with System Call Auditing

To audit system calls and get meaningful file watches, you need to enable audit contexts for system calls. System call auditing capabilities are needed even when you are only configuring plain file or directory watches.

The audit contexts are enabled by default. To turn this feature off for the duration of the current session, execute auditctl -e 0 as root. To turn it back on, execute auditctl -e 1 as root.

31.4 Setting Up Audit Rules

Using audit rules, determine which aspects of the system should be analyzed by audit. Normally this includes important databases and security-relevant configuration files. You may also analyze various system calls in detail if a broad analysis of your system is required. A very detailed example configuration that includes most of the rules that are needed in a CAPP compliant environment is available in Chapter 32, Introducing an Audit Rule Set.

Audit rules can be passed to the audit daemon on the auditctl command line and by composing a rule set in /etc/audit/audit.rules which is processed whenever the audit daemon is started. To customize /etc/audit/audit.rules either edit it directly, or use YaST: Security and Users › Linux Audit Framework (LAF) › Rules for 'auditctl'. Rules passed on the command line are not persistent and need to be re-entered when the audit daemon is restarted.
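
For example, a watch added with auditctl takes effect immediately but is lost when the daemon restarts, whereas the same line placed in /etc/audit/audit.rules is loaded on every daemon start (a minimal sketch; the watched file and key are arbitrary):

auditctl -w /etc/ssh/sshd_config -p wa -k CFG_sshd   # one-shot rule, not persistent
auditctl -l                                          # list the currently loaded rules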

A simple rule set for very basic auditing on a few important files and directories could look like this:

# basic audit system parameters
-D
-b 8192
-f 1
-e 1

# some file and directory watches with keys
-w /var/log/audit/ -k LOG_audit
-w /etc/audit/auditd.conf -k CFG_audit_conf -p rxwa
-w /etc/audit/audit.rules -k CFG_audit_rules -p rxwa

-w /etc/passwd -k CFG_passwd -p rwxa
-w /etc/sysconfig/ -k CFG_sysconfig

# an example system call rule
-a entry,always -S umask

### add your own rules

When configuring the basic audit system parameters (such as the backlog parameter -b), test these settings with your intended audit rule set to determine whether the backlog size is appropriate for the level of logging activity the rule set causes. If your chosen backlog size is too small, your system might not be able to handle the audit load and will consult the failure flag (-f) when the backlog limit is exceeded.
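
To check how the audit system is coping with your rule set, query the kernel's audit status while the rules are active (the exact output format varies between audit versions; this is only a sketch):

auditctl -s   # reports, among other values, the backlog limit and the lost event counter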

Important
Important: Choosing the Failure Flag

When choosing the failure flag, note that -f 2 tells your system to perform an immediate shutdown without flushing any pending data to disk when the limits of your audit system are exceeded. Because this shutdown is not a clean shutdown, restrict the use of -f 2 to only the most security-conscious environments and use -f 1 (system continues to run, issues a warning and audit stops) for any other setup to avoid loss of data or data corruption.

Directory watches produce less verbose output than separate file watches for the files under these directories. To get detailed logging for your system configuration in /etc/sysconfig, for example, add watches for each file. Audit does not support globbing, which means you cannot create a rule that says -w /etc/* and watches all files and directories below /etc.
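
For example, instead of relying on the directory watch alone, you could add explicit watches for the files you care about (a sketch; the chosen files and keys are examples only):

-w /etc/sysconfig/displaymanager -p wa -k CFG_sysconfig
-w /etc/sysconfig/security -p wa -k CFG_sysconfig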

For better identification in the log file, a key has been added to each of the file and directory watches. Using the key, it is easier to comb the logs for events related to a certain rule. When creating keys, distinguish between mere log file watches and configuration file watches by using an appropriate prefix with the key, in this case LOG for a log file watch and CFG for a configuration file watch. Using the file name as part of the key also makes it easier for you to identify events of this type in the log file.

Another thing to keep in mind when creating file and directory watches is that audit cannot deal with files that do not exist when the rules are created. Any file that is added to your system while audit is already running is not watched unless you extend the rule set to watch this new file.

For more information about creating custom rules, refer to Section 30.4, “Passing Parameters to the Audit System”.

Important
Important: Changing Audit Rules

After you change audit rules, always restart the audit daemon with systemctl restart auditd to reread the changed rules.

31.5 Configuring Audit Reports

To avoid having to dig through the raw audit logs to get an impression of what your system is currently doing, run custom audit reports at certain intervals. Custom audit reports enable you to focus on areas of interest and get meaningful statistics on the nature and frequency of the events you are monitoring. To analyze individual events in detail, use the ausearch tool.

Before setting up audit reporting, consider the following:

  • What types of events do you want to monitor by generating regular reports? Select the appropriate aureport command lines as described in Section 30.5.2, “Generating Custom Audit Reports”.

  • What do you want to do with the audit reports? Decide whether to create graphical charts from the accumulated data or whether to transfer it into a spreadsheet or database. Set up the aureport command line and further processing similar to the examples shown in Section 31.6, “Configuring Log Visualization” if you want to visualize your reports.

  • When and at which intervals should the reports run? Set up appropriate automated reporting using cron, as outlined in the sketch after this list.
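
A cron job for such automated reporting could look like the following minimal sketch (the schedule, the log file, and the /etc/cron.d/audit-report file name are hypothetical, and the path to aureport may differ on your system):

# /etc/cron.d/audit-report: daily summary of failed file events at 23:50
50 23 * * * root /usr/sbin/aureport -f -i --failed --summary >> /var/log/audit-reports.log 2>&1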

For this example, assume that you are interested in finding out about any attempts to access your audit, PAM, and system configuration. Proceed as follows to find out about file events on your system:

  1. Generate a full summary report of all events and check for any anomalies, for example in the failed syscalls record; system calls might fail because of insufficient permissions to access a file or because the file does not exist:

    aureport
    
    Summary Report
    ======================
    Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 16:30:10.352
    Selected time for report: 03/02/09 14:13:38 - 17/02/09 16:30:10.352
    Number of changes in configuration: 24
    Number of changes to accounts, groups, or roles: 0
    Number of logins: 9
    Number of failed logins: 15
    Number of authentications: 19
    Number of failed authentications: 578
    Number of users: 3
    Number of terminals: 15
    Number of host names: 4
    Number of executables: 20
    Number of files: 279
    Number of AVC's: 0
    Number of MAC events: 0
    Number of failed syscalls: 994
    Number of anomaly events: 0
    Number of responses to anomaly events: 0
    Number of crypto events: 0
    Number of keys: 2
    Number of process IDs: 1238
    Number of events: 5435
  2. Run a summary report for failed events and check the files record for the number of failed file access events:

    aureport --failed
    
    Failed Summary Report
    ======================
    Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 16:30:10.352
    Selected time for report: 03/02/09 14:13:38 - 17/02/09 16:30:10.352
    Number of changes in configuration: 0
    Number of changes to accounts, groups, or roles: 0
    Number of logins: 0
    Number of failed logins: 15
    Number of authentications: 0
    Number of failed authentications: 578
    Number of users: 1
    Number of terminals: 7
    Number of host names: 4
    Number of executables: 12
    Number of files: 77
    Number of AVC's: 0
    Number of MAC events: 0
    Number of failed syscalls: 994
    Number of anomaly events: 0
    Number of responses to anomaly events: 0
    Number of crypto events: 0
    Number of keys: 2
    Number of process IDs: 713
    Number of events: 1589
  3. To list the files that could not be accessed, run a summary report of failed file events:

    aureport -f -i --failed --summary
    
    Failed File Summary Report
    ===========================
    total  file
    ===========================
    80  /var
    80  spool
    80  cron
    80  lastrun
    46  /usr/lib/locale/en_GB.UTF-8/LC_CTYPE
    45  /usr/lib/locale/locale-archive
    38  /usr/lib/locale/en_GB.UTF-8/LC_IDENTIFICATION
    38  /usr/lib/locale/en_GB.UTF-8/LC_MEASUREMENT
    38  /usr/lib/locale/en_GB.UTF-8/LC_TELEPHONE
    38  /usr/lib/locale/en_GB.UTF-8/LC_ADDRESS
    38  /usr/lib/locale/en_GB.UTF-8/LC_NAME
    38  /usr/lib/locale/en_GB.UTF-8/LC_PAPER
    38  /usr/lib/locale/en_GB.UTF-8/LC_MESSAGES
    38  /usr/lib/locale/en_GB.UTF-8/LC_MONETARY
    38  /usr/lib/locale/en_GB.UTF-8/LC_COLLATE
    38  /usr/lib/locale/en_GB.UTF-8/LC_TIME
    38  /usr/lib/locale/en_GB.UTF-8/LC_NUMERIC
    8  /etc/magic.mgc
    ...

    To focus this summary report on a few files or directories of interest only, such as /etc/audit/auditd.conf, /etc/pam.d, and /etc/sysconfig, use a command similar to the following:

    aureport -f -i --failed --summary | grep -e "/etc/audit/auditd.conf" -e "/etc/pam.d/" -e "/etc/sysconfig"
    
    1  /etc/sysconfig/displaymanager
  4. From the summary report, proceed to isolate these items of interest from the log and find out their event IDs for further analysis:

    aureport -f -i --failed | grep -e "/etc/audit/auditd.conf" -e "/etc/pam.d/" -e "/etc/sysconfig"
    
    993. 17/02/09 16:47:34 /etc/sysconfig/displaymanager readlink no /bin/vim-normal root 7887
    994. 17/02/09 16:48:23 /etc/sysconfig/displaymanager getxattr no /bin/vim-normal root 7889
  5. Use the event ID to get a detailed record for each item of interest:

    ausearch -a 7889 -i
    ----
    time->Tue Feb 17 16:48:23 2009
    type=PATH msg=audit(1234885703.090:7889): item=0 name="/etc/sysconfig/displaymanager" inode=369282 dev=08:06 mode=0100644 ouid=0 ogid=0 rdev=00:00
    type=CWD msg=audit(1234885703.090:7889):  cwd="/root"
    type=SYSCALL msg=audit(1234885703.090:7889): arch=c000003e syscall=191 success=no exit=-61 a0=7e1e20 a1=7f90e4cf9187 a2=7fffed5b57d0 a3=84 items=1 ppid=25548 pid=23045 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=1166 comm="vim" exe="/bin/vim-normal" key=(null)
Tip
Tip: Focusing on a Certain Time Frame

If you are interested in events during a particular period of time, trim down the reports by using start and end dates and times with your aureport commands (-ts and -te). For more information, refer to Section 30.5.2, “Generating Custom Audit Reports”.

All steps (except for the last one) can be run automatically; they are easily scripted and configured as cron jobs. Any of the --failed --summary reports could easily be transformed into a bar chart that plots files versus failed access attempts. For more information about visualizing audit report data, refer to Section 31.6, “Configuring Log Visualization”.

31.6 Configuring Log Visualization

Using the scripts mkbar and mkgraph you can illustrate your audit statistics with various graphs and charts. As with any other aureport command, the plotting commands are scriptable and can easily be configured to run as cron jobs.

mkbar and mkgraph were created by Steve Grubb at Red Hat. They are available from http://people.redhat.com/sgrubb/audit/visualize/. Because the current version of audit in SUSE Linux Enterprise Desktop does not ship with these scripts, proceed as follows to make them available on your system:

Warning
Warning: Downloaded Content Is Dangerous

Use mkbar and mkgraph at your own risk. Any content downloaded from the Web is potentially dangerous to your system, even more so when run with root privileges.

  1. Download the scripts to root's ~/bin directory:

    wget http://people.redhat.com/sgrubb/audit/visualize/mkbar -O ~/bin/mkbar
    wget http://people.redhat.com/sgrubb/audit/visualize/mkgraph -O ~/bin/mkgraph
  2. Adjust the file permissions to read, write, and execute for root:

    chmod 744 ~/bin/mk{bar,graph}

To plot summary reports, such as the ones discussed in Section 31.5, “Configuring Audit Reports”, use the script mkbar. Some example commands could look like the following:

Create a Summary of Events
aureport -e -i --summary | mkbar events
Create a Summary of File Events
aureport -f -i --summary | mkbar files
Create a Summary of Login Events
aureport -l -i --summary | mkbar login
Create a Summary of User Events
aureport -u -i --summary | mkbar users
Create a Summary of System Call Events
aureport -s -i --summary | mkbar syscalls

To create a summary chart of failed events of any of the above event types, add the --failed option to the respective aureport command. To cover a certain period of time only, use the -ts and -te options on aureport. Any of these commands can be tweaked further by narrowing down their scope using grep or egrep and regular expressions. See the comments in the mkbar script for an example. Any of the above commands produces a PNG file containing a bar chart of the requested data.
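
For instance, to chart only the failed file events below /etc, you could filter the summary lines before feeding them to mkbar (a hedged sketch; the chart name etc_failed is arbitrary):

aureport -f -i --failed --summary | grep "/etc/" | mkbar etc_failed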

To illustrate the relationship between different kinds of audit objects, such as users and system calls, use the script mkgraph. Some example commands could look like the following:

Users versus Executables
LC_ALL=C aureport -u -i | awk '/^[0-9]/ { print $4" "$7 }' | sort | uniq | mkgraph users_vs_exec
Users versus Files
LC_ALL=C aureport -f -i | awk '/^[0-9]/ { print $8" "$4 }' | sort | uniq | mkgraph users_vs_files
System Calls versus Commands
LC_ALL=C aureport -s -i | awk '/^[0-9]/ { print $4" "$6 }' | sort | uniq | mkgraph syscall_vs_com
System Calls versus Files
LC_ALL=C aureport -s -i | awk '/^[0-9]/ { print $5" "$4 }' | sort | uniq | mkgraph syscall_vs_file

Graphs can also be combined to illustrate complex relationships. See the comments in the mkgraph script for further information and an example. The graphs produced by this script are created in PostScript format by default, but you can change the output format by changing the EXT variable in the script from ps to png or jpg.
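
For example, to have mkgraph produce PNG files, change the variable near the top of the downloaded script (assuming the script defines EXT as described above):

EXT=png   # was: EXT=ps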

32 Introducing an Audit Rule Set

The following example configuration illustrates how audit can be used to monitor your system. It highlights the most important items that need to be audited to cover the list of auditable events specified by Controlled Access Protection Profile (CAPP).

The example rule set is divided into the following sections:

  • Section 32.1, “Adding Basic Audit Configuration Parameters”
  • Section 32.2, “Adding Watches on Audit Log Files and Configuration Files”
  • Section 32.3, “Monitoring File System Objects”
  • Section 32.4, “Monitoring Security Configuration Files and Databases”
  • Section 32.5, “Monitoring Miscellaneous System Calls”
  • Section 32.6, “Filtering System Call Arguments”
  • Section 32.7, “Managing Audit Event Records Using Keys”

To transform this example into a configuration file to use in your live setup, proceed as follows:

  1. Choose the appropriate settings for your setup and adjust them.

  2. Adjust the file /etc/audit/audit.rules by adding rules from the examples below or by modifying existing rules.

Note
Note: Adjusting the Level of Audit Logging

Do not copy the example below into your audit setup without adjusting it to your needs. Determine what and to what extent to audit.

The entire audit.rules is a collection of auditctl commands. Every line in this file expands to a full auditctl command line. The syntax used in the rule set is the same as that of the auditctl command.
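
For example, the following line in /etc/audit/audit.rules:

-w /etc/passwd -p wa -k CFG_passwd

is equivalent to running this command while the daemon is active:

auditctl -w /etc/passwd -p wa -k CFG_passwd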

32.1 Adding Basic Audit Configuration Parameters

-D 1
-b 8192 2
-f 2 3

1

Delete any preexisting rules before starting to define new ones.

2

Set the number of buffers to take the audit messages. Depending on the level of audit logging on your system, increase or decrease this figure.

3

Set the failure flag to use when the kernel needs to handle critical errors. Possible values are 0 (silent), 1 (printk, print a failure message), and 2 (panic, halt the system).

By emptying the rule queue with the -D option, you make sure that audit does not use any other rule set than what you are offering it by means of this file. Choosing an appropriate buffer number (-b) is vital to avoid having your system fail because of too high an audit load. Choosing the panic failure flag -f 2 ensures that your audit records are complete even if the system is encountering critical errors. By shutting down the system on a critical error, audit makes sure that no process escapes from its control as it otherwise might if level 1 (printk) were chosen.

Important
Important: Choosing the Failure Flag

Before using your audit rule set on a live system, make sure that the setup has been thoroughly evaluated on test systems using the worst case production workload. It is even more critical that you do this when specifying the -f 2 flag, because this instructs the kernel to panic (perform an immediate halt without flushing pending data to disk) if any thresholds are exceeded. Consider the use of the -f 2 flag for only the most security-conscious environments.

32.2 Adding Watches on Audit Log Files and Configuration Files

Adding watches on your audit configuration files and the log files themselves ensures that you can track any attempt to tamper with the configuration files or detect any attempted accesses to the log files.

Note
Note: Creating Directory and File Watches

Creating watches on a directory is not necessarily sufficient if you need events for file access. Events on directory access are only triggered when the directory's inode is updated with metadata changes. To trigger events on file access, add watches for each file to monitor.

-w /var/log/audit/ 1
-w /var/log/audit/audit.log

-w /var/log/audit/audit_log.1
-w /var/log/audit/audit_log.2
-w /var/log/audit/audit_log.3
-w /var/log/audit/audit_log.4

-w /etc/audit/auditd.conf -p wa 2
-w /etc/audit/audit.rules -p wa
-w /etc/libaudit.conf -p wa

1

Set a watch on the directory where the audit log is located. Trigger an event for any type of access attempt to this directory. If you are using log rotation, add watches for the rotated logs as well.

2

Set a watch on an audit configuration file. Log all write and attribute change attempts to this file.

32.3 Monitoring File System Objects

Auditing system calls helps track your system's activity well beyond the application level. By tracking file system–related system calls, get an idea of how your applications are using these system calls and determine whether that use is appropriate. By tracking mount and unmount operations, track the use of external resources (removable media, remote file systems, etc.).

Important
Important: Auditing System Calls

Auditing system calls results in a high logging activity. This activity, in turn, puts a heavy load on the kernel. With a kernel less responsive than usual, the system's backlog and rate limits might be exceeded. Carefully evaluate which system calls to include in your audit rule set and adjust the log settings accordingly. See Section 30.2, “Configuring the Audit Daemon” for details on how to tweak the relevant settings.

-a entry,always -S chmod -S fchmod -S chown -S chown32 -S fchown -S fchown32 -S lchown -S lchown32 1

-a entry,always -S creat -S open -S truncate -S truncate64 -S ftruncate -S ftruncate64 2

-a entry,always -S mkdir -S rmdir 3

-a entry,always -S unlink -S rename -S link -S symlink 4

-a entry,always -S setxattr 5
-a entry,always -S lsetxattr
-a entry,always -S fsetxattr
-a entry,always -S removexattr
-a entry,always -S lremovexattr
-a entry,always -S fremovexattr

-a entry,always -S mknod 6

-a entry,always -S mount -S umount -S umount2 7

1

Enable an audit context for system calls related to changing file ownership and permissions. Depending on the hardware architecture of your system, enable or disable the *32 rules. 64-bit systems, like AMD64/Intel 64, require the *32 rules to be removed.

2

Enable an audit context for system calls related to file content modification. Depending on the hardware architecture of your system, enable or disable the *64 rules. 64-bit systems, like AMD64/Intel 64, require the *64 rules to be removed.

3

Enable an audit context for any directory operation, like creating or removing a directory.

4

Enable an audit context for any linking operation, such as creating a symbolic link, creating a link, unlinking, or renaming.

5

Enable an audit context for any operation related to extended file system attributes.

6

Enable an audit context for the mknod system call, which creates special (device) files.

7

Enable an audit context for any mount or umount operation. On the x86 architecture, both the umount and the umount2 system calls exist, so keep both rules. On the Intel 64 architecture, only umount2 is available, so disable the umount rule.

32.4 Monitoring Security Configuration Files and Databases

To make sure that your system is not made to do undesired things, track any attempts to change the cron and at configurations or the lists of scheduled jobs. Tracking any write access to the user, group, password and login databases and logs helps you identify any attempts to manipulate your system's user database.

Tracking changes to your system configuration (kernel, services, time, etc.) helps you spot any attempts of others to manipulate essential functionality of your system. Changes to the PAM configuration should also be monitored in a secure environment, because changes in the authentication stack should not be made by anyone other than the administrator, and it should be logged which applications are using PAM and how it is used. The same applies to any other configuration files related to secure authentication and communication.

1
-w /var/spool/atspool
-w /etc/at.allow
-w /etc/at.deny

-w /etc/cron.allow -p wa
-w /etc/cron.deny -p wa
-w /etc/cron.d/ -p wa
-w /etc/cron.daily/ -p wa
-w /etc/cron.hourly/ -p wa
-w /etc/cron.monthly/ -p wa
-w /etc/cron.weekly/ -p wa
-w /etc/crontab -p wa
-w /var/spool/cron/root

2
-w /etc/group -p wa
-w /etc/passwd -p wa
-w /etc/shadow

-w /etc/login.defs -p wa
-w /etc/securetty
-w /var/log/lastlog

3
-w /etc/hosts -p wa
-w /etc/sysconfig/
-w /etc/init.d/
-w /etc/ld.so.conf -p wa
-w /etc/localtime -p wa
-w /etc/sysctl.conf -p wa
-w /etc/modprobe.d/
-w /etc/modprobe.conf.local -p wa
-w /etc/modprobe.conf -p wa
4
-w /etc/pam.d/
5
-w /etc/aliases -p wa
-w /etc/postfix/ -p wa

6
-w /etc/ssh/sshd_config

-w /etc/stunnel/stunnel.conf
-w /etc/stunnel/stunnel.pem

-w /etc/vsftpd.ftpusers
-w /etc/vsftpd.conf

7
-a exit,always -S sethostname
-w /etc/issue -p wa
-w /etc/issue.net -p wa

1

Set watches on the at and cron configuration and the scheduled jobs and assign labels to these events.

2

Set watches on the user, group, password, and login databases and logs and set labels to better identify any login-related events, such as failed login attempts.

3

Set a watch and a label on the static host name configuration in /etc/hosts. Track changes to the system configuration directory, /etc/sysconfig. Enable per-file watches if you are interested in file events. Set watches and labels for changes to the boot configuration in the /etc/init.d directory. Enable per-file watches if you are interested in file events. Set watches and labels for any changes to the linker configuration in /etc/ld.so.conf. Set watches and a label for /etc/localtime. Set watches and labels for the kernel configuration files /etc/sysctl.conf, /etc/modprobe.d/, /etc/modprobe.conf.local, and /etc/modprobe.conf.

4

Set watches on the PAM configuration directory. If you are interested in particular files below the directory level, add explicit watches to these files as well.

5

Set watches on the postfix configuration to log any write attempt or attribute change and use labels for better tracking in the logs.

6

Set watches and labels on the SSH, stunnel, and vsftpd configuration files.

7

Perform an audit of the sethostname system call and set watches and labels on the system identification configuration in /etc/issue and /etc/issue.net.

32.5 Monitoring Miscellaneous System Calls

Apart from auditing file system related system calls, as described in Section 32.3, “Monitoring File System Objects”, you can also track various other system calls. Tracking task creation helps you understand your applications' behavior. Auditing the umask system call lets you track how processes modify the file creation mask. Tracking any attempts to change the system time helps you identify anyone or any process trying to manipulate the system time.

1
-a entry,always -S clone -S fork -S vfork

2
-a entry,always -S umask

3
-a entry,always -S adjtimex -S settimeofday

1

Track task creation.

2

Add an audit context to the umask system call.

3

Track attempts to change the system time. adjtimex can be used to skew the time. settimeofday sets the absolute time.

32.6 Filtering System Call Arguments

In addition to the system call auditing introduced in Section 32.3, “Monitoring File System Objects” and Section 32.5, “Monitoring Miscellaneous System Calls”, you can track application behavior to an even higher degree. Applying filters helps you focus audit on areas of primary interest to you. This section introduces filtering system call arguments for non-multiplexed system calls like access and for multiplexed ones like socketcall or ipc. Whether system calls are multiplexed depends on the hardware architecture used. Neither socketcall nor ipc is multiplexed on 64-bit architectures, such as AMD64/Intel 64.

Important
Important: Auditing System Calls

Auditing system calls results in high logging activity, which in turn puts a heavy load on the kernel. With a kernel less responsive than usual, the system's backlog and rate limits might well be exceeded. Carefully evaluate which system calls to include in your audit rule set and adjust the log settings accordingly. See Section 30.2, “Configuring the Audit Daemon” for details on how to tweak the relevant settings.

The access system call checks whether a process would be allowed to read, write, or test for the existence of a file or file system object. Using the -F filter flag, build rules matching specific access calls in the format -F a1=ACCESS_MODE. Check /usr/include/fcntl.h for a list of possible arguments to the access system call.

-a entry,always -S access -F a1=4 1
-a entry,always -S access -F a1=6 2
-a entry,always -S access -F a1=7 3

1

Audit the access system call, but only if the second argument of the system call (mode) is 4 (R_OK). This rule filters for all access calls testing for sufficient read permissions to a file or file system object accessed by a user or process.

2

Audit the access system call, but only if the second argument of the system call (mode) is 6, meaning 4 OR 2, which translates to R_OK OR W_OK. This rule filters for access calls testing for sufficient read and write permissions.

3

Audit the access system call, but only if the second argument of the system call (mode) is 7, meaning 4 OR 2 OR 1, which translates to R_OK OR W_OK OR X_OK. This rule filters for access calls testing for sufficient read, write, and execute permissions.

The socketcall system call is a multiplexed system call. Multiplexed means that there is only one system call for all possible calls and that libc passes the actual system call to use as the first argument (a0). Check the manual page of socketcall for possible system calls and refer to /usr/src/linux/include/linux/net.h for a list of possible argument values and system call names. Audit supports filtering for specific system calls using a -F a0=SYSCALL_NUMBER.

-a entry,always -S socketcall -F a0=1 -F a1=10 1
## Use this line on x86_64, ia64 instead
#-a entry,always -S socket -F a0=10

-a entry,always -S socketcall -F a0=5 2
## Use this line on x86_64, ia64 instead
#-a entry,always -S accept

1

Audit the socket(PF_INET6) system call. The -F a0=1 filter matches all socket system calls and the -F a1=10 filter narrows the matches down to socket system calls carrying the IPv6 protocol family domain parameter (PF_INET6). Check /usr/include/linux/net.h for the first argument (a0) and /usr/src/linux/include/linux/socket.h for the second parameter (a1). 64-bit platforms, like AMD64/Intel 64, do not use multiplexing on socketcall system calls. For these platforms, comment the rule and add the plain system call rules with a filter on PF_INET6.

2

Audit the socketcall system call. The filter flag is set to filter for a0=5 as the first argument to socketcall, which translates to the accept system call if you check /usr/include/linux/net.h. 64-bit platforms, like AMD64/Intel 64, do not use multiplexing on socketcall system calls. For these platforms, comment the rule and add the plain system call rule without argument filtering.

The ipc system call is another example of multiplexed system calls. The actual call to invoke is determined by the first argument passed to the ipc system call. Filtering for these arguments helps you focus on those IPC calls of interest to you. Check /usr/include/linux/ipc.h for possible argument values.

1
## msgctl
-a entry,always -S ipc -F a0=14
## msgget
-a entry,always -S ipc -F a0=13
## Use these lines on x86_64, ia64 instead
#-a entry,always -S msgctl
#-a entry,always -S msgget

2
## semctl
-a entry,always -S ipc -F a0=3
## semget
-a entry,always -S ipc -F a0=2
## semop
-a entry,always -S ipc -F a0=1
## semtimedop
-a entry,always -S ipc -F a0=4
## Use these lines on x86_64, ia64 instead
#-a entry,always -S semctl
#-a entry,always -S semget
#-a entry,always -S semop
#-a entry,always -S semtimedop

3
## shmctl
-a entry,always -S ipc -F a0=24
## shmget
-a entry,always -S ipc -F a0=23
## Use these lines on x86_64, ia64 instead
#-a entry,always -S shmctl
#-a entry,always -S shmget

1

Audit system calls related to IPC SYSV message queues. In this case, the a0 values specify that auditing is added for the msgctl and msgget system calls (14 and 13). 64-bit platforms, like AMD64/Intel 64, do not use multiplexing on ipc system calls. For these platforms, comment the first two rules and add the plain system call rules without argument filtering.

2

Audit system calls related to IPC SYSV message semaphores. In this case, the a0 values specify that auditing is added for the semctl, semget, semop, and semtimedop system calls (3, 2, 1, and 4). 64-bit platforms, like AMD64/Intel 64, do not use multiplexing on ipc system calls. For these platforms, comment the first four rules and add the plain system call rules without argument filtering.

3

Audit system calls related to IPC SYSV shared memory. In this case, the a0 values specify that auditing is added for the shmctl and shmget system calls (24, 23). 64-bit platforms, like AMD64/Intel 64, do not use multiplexing on ipc system calls. For these platforms, comment the first two rules and add the plain system call rules without argument filtering.

32.7 Managing Audit Event Records Using Keys

After configuring a few rules generating events and populating the logs, you need to find a way to tell one event from the other. Using the ausearch command, you can filter the logs for various criteria. Using ausearch -m MESSAGE_TYPE, you can at least filter for events of a certain type. However, to be able to filter for events related to a particular rule, you need to add a key to this rule in the /etc/audit/audit.rules file. This key is then added to the event record every time the rule logs an event. To retrieve these log entries, simply run ausearch -k YOUR_KEY to get a list of records related to the rule carrying this particular key.

As an example, assume you have added the following rule to your rule file:

-w /etc/audit/audit.rules -p wa

Without a key assigned to it, you would probably need to filter for SYSCALL or PATH events and then use grep or similar tools to isolate any events related to the above rule. Now, add a key to the above rule, using the -k option:

-w /etc/audit/audit.rules -p wa -k CFG_audit.rules

You can specify any text string as a key. Distinguish watches related to different types of files (configuration files or log files) from one another using different key prefixes (CFG, LOG, etc.) followed by the file name. Finding any records related to the above rule now comes down to the following:

ausearch -k CFG_audit.rules
----
time->Thu Feb 19 09:09:54 2009
type=PATH msg=audit(1235030994.032:8649): item=3 name="audit.rules~" inode=370603 dev=08:06 mode=0100640 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1235030994.032:8649): item=2 name="audit.rules" inode=370603 dev=08:06 mode=0100640 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1235030994.032:8649): item=1  name="/etc/audit" inode=368599 dev=08:06 mode=040750 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1235030994.032:8649): item=0  name="/etc/audit" inode=368599 dev=08:06 mode=040750 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1235030994.032:8649):  cwd="/etc/audit"
type=SYSCALL msg=audit(1235030994.032:8649): arch=c000003e syscall=82 success=yes exit=0 a0=7deeb0 a1=883b30 a2=2 a3=ffffffffffffffff items=4 ppid=25400 pid=32619 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=1164 comm="vim" exe="/bin/vim-normal" key="CFG_audit.rules"

33 Useful Resources

There are other resources available containing valuable information about the Linux audit framework:

The Audit Manual Pages

There are several man pages installed along with the audit tools that provide valuable and very detailed information:

auditd(8)

The Linux audit daemon

auditd.conf(5)

The Linux audit daemon configuration file

auditctl(8)

A utility to assist controlling the kernel's audit system

autrace(8)

A program similar to strace

ausearch(8)

A tool to query audit daemon logs

aureport(8)

A tool that produces summary reports of audit daemon logs

audispd.conf(5)

The audit event dispatcher configuration file

audispd(8)

The audit event dispatcher daemon talking to plug-in programs

http://people.redhat.com/sgrubb/audit/index.html

The home page of the Linux audit project. This site contains several specifications relating to different aspects of Linux audit, and a short FAQ.

/usr/share/doc/packages/audit

The audit package itself contains a README with basic design information and sample .rules files for different scenarios:

capp.rules: Controlled Access Protection Profile (CAPP)
lspp.rules: Labeled Security Protection Profile (LSPP)
nispom.rules: National Industrial Security Program Operating Manual Chapter 8 (NISPOM)
stig.rules: Secure Technical Implementation Guide (STIG)
http://www.commoncriteriaportal.org/

The official Web site of the Common Criteria project. Learn all about the Common Criteria security certification initiative and what role audit plays in this framework.

A Documentation Updates

This chapter lists content changes for this document.

This manual was updated on the following dates:

A.1 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)

General
Part I, “Authentication”
Chapter 14, SSH: Secure Network Operations
Chapter 15, Masquerading and Firewalls

A.2 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.

A.3 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)

A.4 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

General
  • SMT Guide is now part of the documentation for SUSE Linux Enterprise Desktop.

  • Add-ons provided by SUSE have been renamed as modules and extensions. The manuals have been updated to reflect this change.

  • Numerous small fixes and additions to the documentation, based on technical feedback.

  • The registration service has been changed from Novell Customer Center to SUSE Customer Center.

  • In YaST, you will now reach Network Settings via the System group. Network Devices is gone (https://bugzilla.suse.com/show_bug.cgi?id=867809).

Chapter 4, Setting Up Authentication Servers and Clients Using YaST

Updated the chapter to reflect new GUI improvements for Kerberos/LDAP client (Fate #316349).

Chapter 8, Configuring Security Settings with YaST

Updated chapter because of systemd-related changes (Fate #318425).

Chapter 15, Masquerading and Firewalls
Bugfixes

A.6 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)

General
  • Removed all KDE documentation and references because KDE is no longer shipped.

  • Removed all references to SuSEconfig, which is no longer supported (Fate #100011).

  • Move from System V init to systemd (Fate #310421). Updated affected parts of the documentation.

  • YaST Runlevel Editor has changed to Services Manager (Fate #312568). Updated affected parts of the documentation.

  • Removed all references to ISDN support, as ISDN support has been removed (Fate #314594).

  • Removed all references to the YaST DSL module as it is no longer shipped (Fate #316264).

  • Removed all references to the YaST Modem module as it is no longer shipped (Fate #316264).

  • Btrfs has become the default file system for the root partition (Fate #315901). Updated affected parts of the documentation.

  • The dmesg now provides human-readable time stamps in ctime()-like format (Fate #316056). Updated affected parts of the documentation.

  • syslog and syslog-ng have been replaced by rsyslog (Fate #316175). Updated affected parts of the documentation.

  • MariaDB is now shipped as the relational database instead of MySQL (Fate #313595). Updated affected parts of the documentation.

  • SUSE-related products are no longer available from http://download.novell.com but from http://download.suse.com. Adjusted links accordingly.

  • Novell Customer Center has been replaced with SUSE Customer Center. Updated affected parts of the documentation.

  • /var/run is mounted as tmpfs (Fate #303793). Updated affected parts of the documentation.

  • The following architectures are no longer supported: IA64 and x86. Updated affected parts of the documentation.

  • The traditional method for setting up the network with ifconfig has been replaced by wicked. Updated affected parts of the documentation.

  • A lot of networking commands are deprecated and have been replaced by newer commands (usually ip). Updated affected parts of the documentation.

    arp: ip neighbor
    ifconfig: ip addr, ip link
    iptunnel: ip tunnel
    iwconfig: iw
    nameif: ip link, ifrename
    netstat: ss, ip route, ip -s link, ip maddr
    route: ip route
  • Numerous small fixes and additions to the documentation, based on technical feedback.

Chapter 2, Authentication with PAM

The pam_pwcheck module has been replaced with pam_cracklib and pam_pwhistory. Updated chapter to reflect this change.

Chapter 4, Setting Up Authentication Servers and Clients Using YaST

Added a chapter about the new YaST authentication module for Kerberos and LDAP (Fate #316349). The chapter consists of two parts: Section 4.1, “Configuring an Authentication Server” and Section 4.2, “Configuring an Authentication Client with YaST” (Fate #308902).

Chapter 5, LDAP—A Directory Service

Updated chapter to reflect the changes in YaST regarding authentication setup (Fate #316349).

Chapter 6, Network Authentication with Kerberos

Updated chapter to reflect the changes in YaST regarding authentication setup (Fate #316349).

Chapter 9, Authorization with PolKit

Updated chapter to reflect major software updates.

Chapter 14, SSH: Secure Network Operations
Chapter 17, Managing X.509 Certification

The YaST CA module now allows exporting the key and certificate into separate files. See Section 17.2.5, “Changing Default Values” (Fate #305490).

Part IV, “Confining Privileges with AppArmor”
Part V, “The Linux Audit Framework”

Numerous small fixes and additions, based on technical feedback.

Obsolete Content
Bugfixes
SUSE Linux Enterprise Desktop 12 SP3

System Analysis and Tuning Guide

An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

Publication Date: May 07, 2018
About This Guide
Available Documentation
Feedback
Documentation Conventions
I Basics
1 General Notes on System Tuning
1.1 Be Sure What Problem to Solve
1.2 Rule Out Common Problems
1.3 Finding the Bottleneck
1.4 Step-by-step Tuning
II System Monitoring
2 System Monitoring Utilities
2.1 Multi-Purpose Tools
2.2 System Information
2.3 Processes
2.4 Memory
2.5 Networking
2.6 The /proc File System
2.7 Hardware Information
2.8 Files and File Systems
2.9 User Information
2.10 Time and Date
2.11 Graph Your Data: RRDtool
3 Analyzing and Managing System Log Files
3.1 System Log Files in /var/log/
3.2 Viewing and Parsing Log Files
3.3 Managing Log Files with logrotate
3.4 Monitoring Log Files with logwatch
3.5 Using logger to Make System Log Entries
III Kernel Monitoring
4 SystemTap—Filtering and Analyzing System Data
4.1 Conceptual Overview
4.2 Installation and Setup
4.3 Script Syntax
4.4 Example Script
4.5 User Space Probing
4.6 For More Information
5 Kernel Probes
5.1 Supported Architectures
5.2 Types of Kernel Probes
5.3 Kprobes API
5.4 debugfs Interface
5.5 For More Information
6 Hardware-Based Performance Monitoring with Perf
6.1 Hardware-Based Monitoring
6.2 Sampling and Counting
6.3 Installing Perf
6.4 Perf Subcommands
6.5 Counting Particular Types of Event
6.6 Recording Events Specific to Particular Commands
6.7 For More Information
7 OProfile—System-Wide Profiler
7.1 Conceptual Overview
7.2 Installation and Requirements
7.3 Available OProfile Utilities
7.4 Using OProfile
7.5 Using OProfile's GUI
7.6 Generating Reports
7.7 For More Information
IV Resource Management
8 General System Resource Management
8.1 Planning the Installation
8.2 Disabling Unnecessary Services
8.3 File Systems and Disk Access
9 Kernel Control Groups
9.1 Technical Overview and Definitions
9.2 Scenario
9.3 Control Group Subsystems
9.4 Using Controller Groups
9.5 For More Information
10 Automatic Non-Uniform Memory Access (NUMA) Balancing
10.1 Implementation
10.2 Configuration
10.3 Monitoring
10.4 Impact
11 Power Management
11.1 Power Management at CPU Level
11.2 In-Kernel Governors
11.3 The cpupower Tools
11.4 Monitoring Power Consumption with powerTOP
11.5 Special Tuning Options
11.6 Troubleshooting
11.7 For More Information
V Kernel Tuning
12 Tuning I/O Performance
12.1 Switching I/O Scheduling
12.2 Available I/O Elevators
12.3 I/O Barrier Tuning
12.4 Enable blk-mq I/O Path for SCSI by Default
13 Tuning the Task Scheduler
13.1 Introduction
13.2 Process Classification
13.3 Completely Fair Scheduler
13.4 For More Information
14 Tuning the Memory Management Subsystem
14.1 Memory Usage
14.2 Reducing Memory Usage
14.3 Virtual Memory Manager (VM) Tunable Parameters
14.4 Monitoring VM Behavior
15 Tuning the Network
15.1 Configurable Kernel Socket Buffers
15.2 Detecting Network Bottlenecks and Analyzing Network Traffic
15.3 Netfilter
15.4 Improving the Network Performance with Receive Packet Steering (RPS)
15.5 For More Information
VI Handling System Dumps
16 Tracing Tools
16.1 Tracing System Calls with strace
16.2 Tracing Library Calls with ltrace
16.3 Debugging and Profiling with Valgrind
16.4 For More Information
17 Kexec and Kdump
17.1 Introduction
17.2 Required Packages
17.3 Kexec Internals
17.4 Calculating crashkernel Allocation Size
17.5 Basic Kexec Usage
17.6 How to Configure Kexec for Routine Reboots
17.7 Basic Kdump Configuration
17.8 Analyzing the Crash Dump
17.9 Advanced Kdump Configuration
17.10 For More Information
VII Synchronized Clocks with Precision Time Protocol
18 Precision Time Protocol
18.1 Introduction to PTP
18.2 Using PTP
18.3 Synchronizing the Clocks with phc2sys
18.4 Examples of Configurations
18.5 PTP and NTP
A Documentation Updates
A.1 December 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)
A.2 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
A.3 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
A.4 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)
A.5 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)
A.6 February 2015 (Documentation Maintenance Update)
A.7 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

SUSE Linux Enterprise Desktop is used for a broad range of usage scenarios in enterprise and scientific data centers. SUSE has ensured that SUSE Linux Enterprise Desktop is set up in a way that accommodates different operation purposes with optimal performance. However, SUSE Linux Enterprise Desktop must meet very different demands when employed on a number-crunching server compared to a file server, for example.

It is not possible to ship a distribution that is optimized for all workloads. Different workloads vary substantially in some aspects. Most important among those are I/O access patterns, memory access patterns, and process scheduling. A behavior that perfectly suits a certain workload might reduce performance of another workload. For example, I/O-intensive tasks, such as handling database requests, usually have completely different requirements than CPU-intensive tasks, such as video encoding. The versatility of Linux makes it possible to configure your system in a way that it brings out the best in each usage scenario.

This manual introduces you to means of monitoring and analyzing your system. It describes methods to manage system resources and to tune your system. This guide does not offer recipes for special scenarios, because each server has its own demands. Rather, it enables you to thoroughly analyze your servers and make the most of them.

Part I, “Basics”

Tuning a system requires a carefully planned proceeding. Learn which steps are necessary to successfully improve your system.

Part II, “System Monitoring”

Linux offers a large variety of tools to monitor almost every aspect of the system. Learn how to use these utilities and how to read and analyze the system log files.

Part III, “Kernel Monitoring”

The Linux kernel itself offers means to examine every nut, bolt and screw of the system. This part introduces you to SystemTap, a scripting language for writing kernel modules that can be used to analyze and filter data. Collect debugging information and find bottlenecks by using kernel probes and Perf. Lastly, monitor applications with OProfile.

Part IV, “Resource Management”

Learn how to set up a tailor-made system that exactly fits the server's needs. Get to know how to use power management while keeping the performance of a system at a level that matches the current requirements.

Part V, “Kernel Tuning”

The Linux kernel can be optimized using sysctl, via the /proc and /sys file systems, or via kernel command line parameters. This part covers tuning the I/O performance and optimizing the way Linux schedules processes. It also describes basic principles of memory management and shows how memory management can be fine-tuned to suit the needs of specific applications and usage patterns. Furthermore, it describes how to optimize network performance.

Part VI, “Handling System Dumps”

This part enables you to analyze and handle application or system crashes. It introduces tracing tools such as strace or ltrace and describes how to handle system crashes using Kexec and Kdump.

Tip
Tip: Getting the SUSE Linux Enterprise SDK

The SDK is a module for SUSE Linux Enterprise and is available via an online channel from the SUSE Customer Center. Alternatively, go to http://download.suse.com/, search for SUSE Linux Enterprise Software Development Kit and download it from there. Refer to Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products for details.

1 Available Documentation

Note
Note: Online Documentation and Latest Updates

Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates, and browse or download the documentation in various formats.

In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.

The following documentation is available for this product:

Installation Quick Start

Lists the system requirements and guides you step-by-step through the installation of SUSE Linux Enterprise Desktop from DVD, or from an ISO image.

Deployment Guide

Shows how to install single or multiple systems and how to exploit the product-inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly-customized, and automated installation technique.

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Security Guide

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product-inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

System Analysis and Tuning Guide

An administrator's guide for problem detection, resolution and optimization. Learn how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

GNOME User Guide

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

2 Feedback


Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

3 Documentation Conventions


The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

Part I Basics

1 General Notes on System Tuning

This manual discusses how to find the reasons for performance problems and provides means to solve these problems. Before you start tuning your system, you should make sure you have ruled out common problems and have found the cause for the problem. You should also have a detailed plan on how to tune the system, because applying random tuning tips often will not help and could make things worse.

1 General Notes on System Tuning

Abstract

This manual discusses how to find the reasons for performance problems and provides means to solve these problems. Before you start tuning your system, you should make sure you have ruled out common problems and have found the cause for the problem. You should also have a detailed plan on how to tune the system, because applying random tuning tips often will not help and could make things worse.

Procedure 1.1: General Approach When Tuning a System
  1. Specify the problem that needs to be solved.

  2. In case the degradation is new, identify any recent changes to the system.

  3. Identify why the issue is considered a performance problem.

  4. Specify a metric that can be used to analyze performance. This metric could for example be latency, throughput, the maximum number of users that are simultaneously logged in, or the maximum number of active users.

  5. Measure current performance using the metric from the previous step.

  6. Identify the subsystem(s) where the application is spending the most time.

    1. Monitor the system and/or the application.

    2. Analyze the data, categorize where time is being spent.

  7. Tune the subsystem identified in the previous step.

  8. Remeasure the current performance without monitoring, using the same metric as before.

  9. If performance is still not acceptable, start over with Step 3.

1.1 Be Sure What Problem to Solve

Before starting to tune a system, try to describe the problem as exactly as possible. A statement like The system is slow! is not a helpful problem description. For example, it could make a difference whether the system speed needs to be improved in general or only at peak times.

Furthermore, make sure you can apply a measurement to your problem, otherwise you cannot verify if the tuning was a success or not. You should always be able to compare before and after. Which metrics to use depends on the scenario or application you are looking into. Relevant Web server metrics, for example, could be expressed in terms of:

Latency

The time to deliver a page

Throughput

Number of pages served per second or megabytes transferred per second

Active Users

The maximum number of users that can be downloading pages while still receiving pages within an acceptable latency

1.2 Rule Out Common Problems

A performance problem often is caused by network or hardware problems, bugs, or configuration issues. Make sure to rule out problems such as the ones listed below before attempting to tune your system:

  • Check the output of the systemd journal (see Chapter 16, journalctl: Query the systemd Journal) for unusual entries.

  • Check (using top or ps) whether a certain process misbehaves by eating up unusual amounts of CPU time or memory.

  • Check for network problems by inspecting /proc/net/dev (see the example after this list).

  • In case of I/O problems with physical disks, make sure it is not caused by hardware problems (check the disk with the smartmontools) or by a full disk.

  • Ensure that background jobs are scheduled to run at times when the server load is low. Those jobs should also run with low priority (set via nice).

  • If the machine runs several services using the same resources, consider moving services to another server.

  • Last, make sure your software is up-to-date.
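
As a quick check for the network item above, the error and drop counters can be read directly from the proc file system (a minimal sketch; interface names and counter values depend on your system, output omitted):

tux > cat /proc/net/dev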

1.3 Finding the Bottleneck

Finding the bottleneck is very often the hardest part when tuning a system. SUSE Linux Enterprise Desktop offers many tools to help you with this task. See Part II, “System Monitoring” for detailed information on general system monitoring applications and log file analysis. If the problem requires long-term, in-depth analysis, the Linux kernel offers means to perform such analysis. See Part III, “Kernel Monitoring” for coverage.

Once you have collected the data, it needs to be analyzed. First, inspect whether the server's hardware (memory, CPU, bus) and its I/O capacities (disk, network) are sufficient. If these basic conditions are met, the system might benefit from tuning.

1.4 Step-by-step Tuning

Make sure to carefully plan the tuning itself. It is of vital importance to only do one step at a time. Only by doing so can you measure whether the change provided an improvement or even had a negative impact. Each tuning activity should be measured over a sufficient time period to ensure you can do an analysis based on significant data. If you cannot measure a positive effect, do not make the change permanent. Chances are that it might have a negative effect in the future.

Part II System Monitoring

2 System Monitoring Utilities

There are a number of programs, tools, and utilities which you can use to examine the status of your system. This chapter introduces some of them and describes their most important and frequently used parameters.

3 Analyzing and Managing System Log Files

System log file analysis is one of the most important tasks when analyzing the system. In fact, looking at the system log files should be the first thing to do when maintaining or troubleshooting a system. SUSE Linux Enterprise Desktop automatically logs almost everything that happens on the system …

2 System Monitoring Utilities

Abstract

There are a number of programs, tools, and utilities which you can use to examine the status of your system. This chapter introduces some of them and describes their most important and frequently used parameters.

Note
Note: Gathering and Analyzing System Information with supportconfig

Apart from the utilities presented in the following, SUSE Linux Enterprise Desktop also contains supportconfig, a tool to create reports about the system, such as the current kernel version, hardware, installed packages, partition setup, and much more. These reports are used to provide SUSE support with the information needed in case a support ticket is created. However, they can also be analyzed for known issues to help resolve problems faster. For this purpose, SUSE Linux Enterprise Desktop provides both an appliance and a command line tool for Supportconfig Analysis (SCA). See Chapter 33, Gathering System Information for Support for details.

For each of the described commands, examples of the relevant outputs are presented. In the examples, the first line is the command itself (after the tux > or root #). Omissions are indicated with square brackets ([...]) and long lines are wrapped where necessary. Line breaks for long lines are indicated by a backslash (\).

tux > command -x -y
output line 1
output line 2
output line 3 is annoyingly long, so long that \
    we need to break it
output line 4
[...]
output line 98
output line 99

The descriptions have been kept short so that we can include as many utilities as possible. Further information for all the commands can be found in the manual pages. Most of the commands also understand the parameter --help, which produces a brief list of possible parameters.

2.1 Multi-Purpose Tools

While most Linux system monitoring tools monitor only a single aspect of the system, there are a few tools with a broader scope. To get an overview and find out which part of the system to examine further, use these tools first.

2.1.1 vmstat

vmstat collects information about processes, memory, I/O, interrupts and CPU. If called without a sampling rate, it displays average values since the last reboot. When called with a sampling rate, it displays actual samples:

Example 2.1: vmstat Output on a Lightly Used Machine
tux > vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0  44264  81520    424 935736    0    0    12    25   27   34  1  0 98   0  0
 0  0  44264  81552    424 935736    0    0     0     0   38   25  0  0 100  0  0
 0  0  44264  81520    424 935732    0    0     0     0   23   15  0  0 100  0  0
 0  0  44264  81520    424 935732    0    0     0     0   36   24  0  0 100  0  0
 0  0  44264  81552    424 935732    0    0     0     0   51   38  0  0 100  0  0
Example 2.2: vmstat Output on a Heavily Used Machine (CPU bound)
tux > vmstat 2
procs -----------memory----------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff   cache   si   so    bi    bo   in   cs us sy id wa st
32  1  26236 459640 110240 6312648    0    0  9944     2 4552 6597 95  5  0  0  0
23  1  26236 396728 110336 6136224    0    0  9588     0 4468 6273 94  6  0  0  0
35  0  26236 554920 110508 6166508    0    0  7684 27992 4474 4700 95  5  0  0  0
28  0  26236 518184 110516 6039996    0    0 10830     4 4446 4670 94  6  0  0  0
21  5  26236 716468 110684 6074872    0    0  8734 20534 4512 4061 96  4  0  0  0
Tip
Tip: First Line of Output

The first line of the vmstat output always displays average values since the last reboot.

The columns show the following:

r

Shows the number of processes in a runnable state. These processes are either executing or waiting for a free CPU slot. If the number of processes in this column is constantly higher than the number of CPUs available, this may be an indication of insufficient CPU power.

b

Shows the number of processes waiting for a resource other than a CPU. A high number in this column may indicate an I/O problem (network or disk).

swpd

The amount of swap space (KB) currently used.

free

The amount of unused memory (KB).

inact

Recently unused memory that can be reclaimed. This column is only visible when calling vmstat with the parameter -a (recommended).

active

Recently used memory that normally does not get reclaimed. This column is only visible when calling vmstat with the parameter -a (recommended).

buff

File buffer cache (KB) in RAM that contains file system metadata. This column is not visible when calling vmstat with the parameter -a.

cache

Page cache (KB) in RAM with the actual contents of files. This column is not visible when calling vmstat with the parameter -a.

si / so

Amount of data (KB) that is moved from swap to RAM (si) or from RAM to swap (so) per second. High so values over a long period of time may indicate that an application is leaking memory and the leaked memory is being swapped out. High si values over a long period of time could mean that an application that was inactive for a very long time is now active again. Combined high si and so values for prolonged periods of time are evidence of swap thrashing and may indicate that more RAM needs to be installed in the system because there is not enough memory to hold the working set size.

bi

Number of blocks per second received from a block device (for example, a disk read). Note that swapping also impacts the values shown here. The block size may vary between file systems but can be determined using the stat utility. If throughput data is required then iostat may be used.

bo

Number of blocks per second sent to a block device (for example, a disk write). Note that swapping also impacts the values shown here.

in

Interrupts per second. A high value may indicate a high I/O level (network and/or disk), but could also be triggered for other reasons such as inter-processor interrupts triggered by another activity. Make sure to also check /proc/interrupts to identify the source of interrupts.

cs

Number of context switches per second. This is the number of times that the kernel replaces executable code of one program in memory with that of another program.

us

Percentage of CPU usage executing application code.

sy

Percentage of CPU usage executing kernel code.

id

Percentage of CPU time spent idling. If this value is zero over a longer time, your CPU(s) are working to full capacity. This is not necessarily a bad sign—rather refer to the values in columns r and b to determine if your machine is equipped with sufficient CPU power.

wa

If wa time is non-zero, it indicates throughput lost because of waiting for I/O. This may be inevitable, for example, if a file is being read for the first time, background writeback cannot keep up, and so on. It can also be an indicator for a hardware bottleneck (network or hard disk). Lastly, it can indicate a potential for tuning the virtual memory manager (refer to Chapter 14, Tuning the Memory Management Subsystem).

st

Percentage of CPU time stolen from a virtual machine.

See vmstat --help for more options.
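
To display the active and inact columns described above, call vmstat with the parameter -a (a minimal sketch; interval and count are arbitrary, output omitted):

tux > vmstat -a 2 5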

2.1.2 dstat


dstat is a replacement for tools such as vmstat, iostat, netstat, or ifstat. dstat displays information about the system resources in real time. For example, you can compare disk usage in combination with interrupts from the IDE controller, or compare network bandwidth with the disk throughput (in the same interval).

By default, its output is presented in readable tables. Alternatively, CSV output can be produced which is suitable as a spreadsheet import format.

It is written in Python and can be enhanced with plug-ins.

This is the general syntax:

dstat [-afv] [OPTIONS..] [DELAY [COUNT]]

All options and parameters are optional. Without any parameter, dstat displays statistics about CPU (-c, --cpu), disk (-d, --disk), network (-n, --net), paging (-g, --page), and the interrupts and context switches of the system (-y, --sys); it refreshes the output every second ad infinitum:

root # dstat
You did not select any stats, using -cdngy by default.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  0   0 100   0   0   0|  15k   44k|   0     0 |   0    82B| 148   194
  0   0 100   0   0   0|   0     0 |5430B  170B|   0     0 | 163   187
  0   0 100   0   0   0|   0     0 |6363B  842B|   0     0 | 196   185
-a, --all

equal to -cdngy (default)

-f, --full

expand -C, -D, -I, -N and -S discovery lists

-v, --vmstat

equal to -pmgdsc, -D total

DELAY

delay in seconds between each update

COUNT

the number of updates to display before exiting

The default delay is 1 and the count is unspecified (unlimited).
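
For example, to display only disk statistics with ten updates at five-second intervals (arbitrary values for DELAY and COUNT), you might call:

root # dstat -d 5 10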

For more information, see the man page of dstat and its Web page at http://dag.wieers.com/home-made/dstat/.

2.1.3 System Activity Information: sar

sar can generate extensive reports on almost all important system activities, among them CPU, memory, IRQ usage, I/O, or networking. It can also generate reports on the fly. sar gathers all its data from the /proc file system.

Note
Note: sysstat Package

sar is part of the sysstat package. Install the package either with YaST, or with zypper in sysstat.

2.1.3.1 Generating Reports with sar

To generate reports on the fly, call sar with an interval (seconds) and a count. To generate reports from files, specify a file name with the option -f instead of interval and count. If neither file name, interval, nor count are specified, sar attempts to generate a report from /var/log/sa/saDD, where DD stands for the current day. This is the default location to which sadc (the system activity data collector) writes its data. Query multiple files with multiple -f options.

sar 2 10                         # on-the-fly report, 10 times every 2 seconds
sar -f ~/reports/sar_2014_07_17  # queries file sar_2014_07_17
sar                              # queries file from today in /var/log/sa/
cd /var/log/sa && \
sar -f sa01 -f sa02              # queries files /var/log/sa/0[12]

Find examples of useful sar calls and their interpretation below. For detailed information on the meaning of each column, refer to the man page of sar (man 1 sar). Also refer to the man page for more options and reports; sar offers plenty of them.

2.1.3.1.1 CPU Usage Report: sar

When called with no options, sar shows a basic report about CPU usage. On multi-processor machines, results for all CPUs are summarized. Use the option -P ALL to also see statistics for individual CPUs.

root # sar 10 5
Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (2 CPU)

17:51:29        CPU     %user     %nice   %system   %iowait    %steal     %idle
17:51:39        all     57,93      0,00      9,58      1,01      0,00     31,47
17:51:49        all     32,71      0,00      3,79      0,05      0,00     63,45
17:51:59        all     47,23      0,00      3,66      0,00      0,00     49,11
17:52:09        all     53,33      0,00      4,88      0,05      0,00     41,74
17:52:19        all     56,98      0,00      5,65      0,10      0,00     37,27
Average:        all     49,62      0,00      5,51      0,24      0,00     44,62

%iowait displays the percentage of time that the CPU was idle while waiting for an I/O request. If this value is significantly higher than zero over a longer time, there is a bottleneck in the I/O system (network or hard disk). If the %idle value is zero over a longer time, your CPU is working at capacity.

2.1.3.1.2 Memory Usage Report: sar -r

Generate an overall picture of the system memory (RAM) by using the option -r:

root # sar -r 10 5
Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (2 CPU)

17:55:27 kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
17:55:37    104232   1834624    94.62        20   627340  2677656   66.24   802052  828024    1744
17:55:47     98584   1840272    94.92        20   624536  2693936   66.65   808872  826932    2012
17:55:57     87088   1851768    95.51        20   605288  2706392   66.95   827260  821304    1588
17:56:07     86268   1852588    95.55        20   599240  2739224   67.77   829764  820888    3036
17:56:17    104260   1834596    94.62        20   599864  2730688   67.56   811284  821584    3164
Average:     96086   1842770    95.04        20   611254  2709579   67.03   815846  823746    2309

The columns kbcommit and %commit show an approximation of the maximum amount of memory (RAM and swap) that the current workload could need. While kbcommit displays the absolute number in kilobytes, %commit displays a percentage.

2.1.3.1.3 Paging Statistics Report: sar -B

Use the option -B to display the kernel paging statistics.

root # sar -B 10 5
Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (2 CPU)

18:23:01 pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
18:23:11   366.80     11.60  542.50     1.10  4354.80      0.00      0.00      0.00   0.00
18:23:21     0.00    333.30 1522.40     0.00 18132.40      0.00      0.00      0.00   0.00
18:23:31    47.20    127.40 1048.30     0.10 11887.30      0.00      0.00      0.00   0.00
18:23:41    46.40      2.50  336.10     0.10  7945.00      0.00      0.00      0.00   0.00
18:23:51     0.00    583.70 2037.20     0.00 17731.90      0.00      0.00      0.00   0.00
Average:    92.08    211.70 1097.30     0.26 12010.28      0.00      0.00      0.00   0.00

The majflt/s (major faults per second) column shows how many pages are loaded from disk into memory. The source of the faults may be file accesses or faults. At times, many major faults are normal, for example during application start-up. If major faults are experienced for the entire lifetime of the application, it may be an indication that there is insufficient main memory, particularly if combined with large amounts of direct scanning (pgscand/s).

The %vmeff column shows the number of pages scanned (pgscand/s) in relation to the ones being reused from the main memory cache or the swap cache (pgsteal/s). It is a measurement of the efficiency of page reclaim. Healthy values are either near 100 (every inactive page swapped out is being reused) or 0 (no pages have been scanned). The value should not drop below 30.

2.1.3.1.4 Block Device Statistics Report: sar -d

Use the option -d to display the block device statistics report (for hard disks, optical drives, USB storage devices, and so on). Make sure to use the additional option -p (pretty-print) to make the DEV column readable.

root # sar -d -p 10 5
 Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (2 CPU)

18:46:09 DEV   tps rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
18:46:19 sda  1.70    33.60      0.00     19.76      0.00      0.47      0.47      0.08
18:46:19 sr0  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00

18:46:19 DEV   tps rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
18:46:29 sda  8.60   114.40    518.10     73.55      0.06      7.12      0.93      0.80
18:46:29 sr0  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00

18:46:29 DEV   tps rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
18:46:39 sda 40.50  3800.80    454.90    105.08      0.36      8.86      0.69      2.80
18:46:39 sr0  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00

18:46:39 DEV   tps rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
18:46:49 sda  1.40     0.00    204.90    146.36      0.00      0.29      0.29      0.04
18:46:49 sr0  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00

18:46:49 DEV   tps rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
18:46:59 sda  3.30     0.00    503.80    152.67      0.03      8.12      1.70      0.56
18:46:59 sr0  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00

Average: DEV   tps rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
Average: sda 11.10   789.76    336.34    101.45      0.09      8.07      0.77      0.86
Average: sr0  0.00     0.00      0.00      0.00      0.00      0.00      0.00      0.00

Compare the Average values for tps, rd_sec/s, and wr_sec/s of all disks. Constantly high values in the svctm and %util columns could be an indication that the I/O subsystem is a bottleneck.

If the machine uses multiple disks, then it is best if I/O is interleaved evenly between disks of equal speed and capacity. It will be necessary to take into account whether the storage has multiple tiers. Furthermore, if there are multiple paths to storage then consider what the link saturation will be when balancing how storage is used.

2.1.3.1.5 Network Statistics Reports: sar -n KEYWORD

The option -n lets you generate multiple network related reports. Specify one of the following keywords along with the -n:

  • DEV: Generates a statistic report for all network devices

  • EDEV: Generates an error statistics report for all network devices

  • NFS: Generates a statistic report for an NFS client

  • NFSD: Generates a statistic report for an NFS server

  • SOCK: Generates a statistic report on sockets

  • ALL: Generates all network statistic reports
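
For example, to print three reports on all network devices at two-second intervals (arbitrary interval and count, output omitted), you might call:

root # sar -n DEV 2 3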

2.1.3.2 Visualizing sar Data

sar reports are not always easy for humans to parse. kSar, a Java application visualizing your sar data, creates easy-to-read graphs. It can even generate PDF reports. kSar accepts both data generated on the fly and past data from a file. kSar is licensed under the BSD license and is available from https://sourceforge.net/projects/ksar/.

2.2 System Information

2.2.1 Device Load Information: iostat

To monitor the system device load, use iostat. It generates reports that can be useful for better balancing the load between physical disks attached to your system.

To be able to use iostat, install the package sysstat.

The first iostat report shows statistics collected since the system was booted. Subsequent reports cover the time since the previous report.

tux > iostat
Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          17.68    4.49    4.24    0.29    0.00   73.31

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               2.02        36.74        45.73    3544894    4412392
sda               1.05         5.12        13.47     493753    1300276
sdc               0.02         0.14         0.00      13641         37

Invoking iostat in this way will help you find out whether throughput is different from your expectation, but not why. Such questions can be better answered by an extended report which can be generated by invoking iostat -x. Extended reports additionally include, for example, information on average queue sizes and average wait times. It may also be easier to evaluate the data if idle block devices are excluded using the -z switch. Find definitions for each of the displayed column titles in the man page of iostat (man 1 iostat).
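
A combined invocation might look like the following (arbitrary interval and count, output omitted), printing extended statistics for active devices five times at three-second intervals:

tux > iostat -xz 3 5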

You can also specify that a certain device should be monitored at specified intervals. For example, to generate five reports at three-second intervals for the device sda, use:

tux > iostat -p sda 3 5

To show statistics of network file systems (NFS), there are two similar utilities:

  • nfsiostat-sysstat is included with the package sysstat.

  • nfsiostat is included with the package nfs-client.

2.2.2 Processor Activity Monitoring: mpstat

The utility mpstat examines activities of each available processor. If your system has one processor only, the global average statistics will be reported.

The timing arguments work the same way as with the iostat command. Entering mpstat 2 5 prints five reports for all processors in two-second intervals.

root # mpstat 2 5
Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (2 CPU)

13:51:10  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice   %idle
13:51:12  all   8,27   0,00  0,50     0,00  0,00   0,00    0,00    0,00    0,00   91,23
13:51:14  all  46,62   0,00  3,01     0,00  0,00   0,25    0,00    0,00    0,00   50,13
13:51:16  all  54,71   0,00  3,82     0,00  0,00   0,51    0,00    0,00    0,00   40,97
13:51:18  all  78,77   0,00  5,12     0,00  0,00   0,77    0,00    0,00    0,00   15,35
13:51:20  all  51,65   0,00  4,30     0,00  0,00   0,51    0,00    0,00    0,00   43,54
Average:  all  47,85   0,00  3,34     0,00  0,00   0,40    0,00    0,00    0,00   48,41

From the mpstat data, you can see:

  • The ratio between the %usr and %sys. For example, a ratio of 10:1 indicates the workload is mostly running application code and analysis should focus on the application. A ratio of 1:10 indicates the workload is mostly kernel-bound and tuning the kernel is worth considering. Alternatively, determine why the application is kernel-bound and see if that can be alleviated.

  • Whether there is a subset of CPUs that are nearly fully utilized even if the system is lightly loaded overall. A small number of hot CPUs can indicate that the workload is not parallelized and could benefit from executing on a machine with a smaller number of faster processors.

2.2.3 Processor Frequency Monitoring: turbostat

turbostat shows frequencies, load, temperature, and power of AMD64/Intel 64 processors. It can operate in two modes: If called with a command, the command process is forked and statistics are displayed upon command completion. When run without a command, it will display updated statistics every five seconds. Note that turbostat requires the kernel module msr to be loaded.

tux > sudo turbostat find /etc -type d -exec true {} \;
0.546880 sec
     CPU Avg_MHz   Busy% Bzy_MHz TSC_MHz
       -     416   28.43    1465    3215
       0     631   37.29    1691    3215
       1     416   27.14    1534    3215
       2     270   24.30    1113    3215
       3     406   26.57    1530    3214
       4     505   32.46    1556    3214
       5     270   22.79    1184    3214

The output depends on the CPU type and may vary. To display more details such as temperature and power, use the --debug option. For more command line options and an explanation of the field descriptions, refer to man 8 turbostat.

2.2.4 Task Monitoring: pidstat

If you need to see what load a particular task applies to your system, use the pidstat command. It prints the activity of every selected task, or of all tasks managed by the Linux kernel if no task is specified. You can also set the number of reports to be displayed and the time interval between them.

For example, pidstat -C firefox 2 3 prints the load statistics for tasks whose command name includes the string firefox. Three reports are printed at two-second intervals.

root # pidstat -C firefox 2 3
Linux 4.4.21-64-default (jupiter)         10/12/16        _x86_64_        (2 CPU)

14:09:11      UID       PID    %usr %system  %guest    %CPU   CPU  Command
14:09:13     1000       387   22,77    0,99    0,00   23,76     1  firefox

14:09:13      UID       PID    %usr %system  %guest    %CPU   CPU  Command
14:09:15     1000       387   46,50    3,00    0,00   49,50     1  firefox

14:09:15      UID       PID    %usr %system  %guest    %CPU   CPU  Command
14:09:17     1000       387   60,50    7,00    0,00   67,50     1  firefox

Average:      UID       PID    %usr %system  %guest    %CPU   CPU  Command
Average:     1000       387   43,19    3,65    0,00   46,84     -  firefox

Similarly, pidstat -d can be used to estimate how much I/O tasks are doing, whether they are sleeping on that I/O, and how many clock ticks the task was stalled.
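
For example, to print three I/O reports for all tasks at two-second intervals (arbitrary values, output omitted), you might call:

root # pidstat -d 2 3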

2.2.5 Kernel Ring Buffer: dmesg

The Linux kernel keeps certain messages in a ring buffer. To view these messages, enter the command dmesg -T.

Older events are logged in the systemd journal. See Chapter 16, journalctl: Query the systemd Journal for more information on the journal.
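
For example, to limit the output to the most recent kernel messages, you can pipe it through tail (a minimal sketch; the number of lines is arbitrary):

tux > dmesg -T | tail -n 20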

2.2.6 List of Open Files: lsof

To view a list of all the files open for the process with process ID PID, use lsof -p PID. For example, to view all the files used by the current shell, enter:

root # lsof -p $$
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF  NODE NAME
bash    8842 root  cwd    DIR   0,32      222  6772 /root
bash    8842 root  rtd    DIR   0,32      166   256 /
bash    8842 root  txt    REG   0,32   656584 31066 /bin/bash
bash    8842 root  mem    REG   0,32  1978832 22993 /lib64/libc-2.19.so
[...]
bash    8842 root    2u   CHR  136,2      0t0     5 /dev/pts/2
bash    8842 root  255u   CHR  136,2      0t0     5 /dev/pts/2

The special shell variable $$, whose value is the process ID of the shell, has been used.

When used with -i, lsof lists currently open Internet files as well:

root # lsof -i
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
wickedd-d  917 root    8u  IPv4  16627      0t0  UDP *:bootpc
wickedd-d  918 root    8u  IPv6  20752      0t0  UDP [fe80::5054:ff:fe72:5ead]:dhcpv6-client
sshd      3152 root    3u  IPv4  18618      0t0  TCP *:ssh (LISTEN)
sshd      3152 root    4u  IPv6  18620      0t0  TCP *:ssh (LISTEN)
master    4746 root   13u  IPv4  20588      0t0  TCP localhost:smtp (LISTEN)
master    4746 root   14u  IPv6  20589      0t0  TCP localhost:smtp (LISTEN)
sshd      8837 root    5u  IPv4 293709      0t0  TCP jupiter.suse.de:ssh->venus.suse.de:33619 (ESTABLISHED)
sshd      8837 root    9u  IPv6 294830      0t0  TCP localhost:x11 (LISTEN)
sshd      8837 root   10u  IPv4 294831      0t0  TCP localhost:x11 (LISTEN)

2.2.7 Kernel and udev Event Sequence Viewer: udevadm monitor

udevadm monitor listens to the kernel uevents and events sent out by a udev rule and prints the device path (DEVPATH) of the event to the console. This is a sequence of events while connecting a USB memory stick:

Note
Note: Monitoring udev Events

Only the root user is allowed to monitor udev events by running the udevadm command.

UEVENT[1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2
UEVENT[1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UEVENT[1138806687] add@/class/scsi_host/host4
UEVENT[1138806687] add@/class/usb_device/usbdev4.10
UDEV  [1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2
UDEV  [1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UDEV  [1138806687] add@/class/scsi_host/host4
UDEV  [1138806687] add@/class/usb_device/usbdev4.10
UEVENT[1138806692] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UEVENT[1138806692] add@/block/sdb
UEVENT[1138806692] add@/class/scsi_generic/sg1
UEVENT[1138806692] add@/class/scsi_device/4:0:0:0
UDEV  [1138806693] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UDEV  [1138806693] add@/class/scsi_generic/sg1
UDEV  [1138806693] add@/class/scsi_device/4:0:0:0
UDEV  [1138806693] add@/block/sdb
UEVENT[1138806694] add@/block/sdb/sdb1
UDEV  [1138806694] add@/block/sdb/sdb1
UEVENT[1138806694] mount@/block/sdb/sdb1
UEVENT[1138806697] umount@/block/sdb/sdb1

2.3 Processes

2.3.1 Interprocess Communication: ipcs

The command ipcs produces a list of the IPC resources currently in use:

root # ipcs
------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 65536      tux        600        524288     2          dest
0x00000000 98305      tux        600        4194304    2          dest
0x00000000 884738     root       600        524288     2          dest
0x00000000 786435     tux        600        4194304    2          dest
0x00000000 12058628   tux        600        524288     2          dest
0x00000000 917509     root       600        524288     2          dest
0x00000000 12353542   tux        600        196608     2          dest
0x00000000 12451847   tux        600        524288     2          dest
0x00000000 11567114   root       600        262144     1          dest
0x00000000 10911763   tux        600        2097152    2          dest
0x00000000 11665429   root       600        2336768    2          dest
0x00000000 11698198   root       600        196608     2          dest
0x00000000 11730967   root       600        524288     2          dest

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0xa12e0919 32768      tux        666        2

2.3.2 Process List: ps

The command ps produces a list of processes. Most parameters must be written without a minus sign. Refer to ps --help for a brief help or to the man page for extensive help.

To list all processes with user and command line information, use ps axu:

tux > ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.3  34376  4608 ?        Ss   Jul24   0:02 /usr/lib/systemd/systemd
root         2  0.0  0.0      0     0 ?        S    Jul24   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    Jul24   0:00 [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   Jul24   0:00 [kworker/0:0H]
root         6  0.0  0.0      0     0 ?        S    Jul24   0:00 [kworker/u2:0]
root         7  0.0  0.0      0     0 ?        S    Jul24   0:00 [migration/0]
[...]
tux      12583  0.0  0.1 185980  2720 ?        Sl   10:12   0:00 /usr/lib/gvfs/gvfs-mtp-volume-monitor
tux      12587  0.0  0.1 198132  3044 ?        Sl   10:12   0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
tux      12591  0.0  0.1 181940  2700 ?        Sl   10:12   0:00 /usr/lib/gvfs/gvfs-goa-volume-monitor
tux      12594  8.1 10.6 1418216 163564 ?      Sl   10:12   0:03 /usr/bin/gnome-shell
tux      12600  0.0  0.3 393448  5972 ?        Sl   10:12   0:00 /usr/lib/gnome-settings-daemon-3.0/gsd-printer
tux      12625  0.0  0.6 227776 10112 ?        Sl   10:12   0:00 /usr/lib/gnome-control-center-search-provider
tux      12626  0.5  1.5 890972 23540 ?        Sl   10:12   0:00 /usr/bin/nautilus --no-default-window
[...]

To check how many sshd processes are running, use the option -p together with the command pidof, which lists the process IDs of the given processes.

tux > ps -p $(pidof sshd)
  PID TTY      STAT   TIME COMMAND
 1545 ?        Ss     0:00 /usr/sbin/sshd -D
 4608 ?        Ss     0:00 sshd: root@pts/0

The process list can be formatted according to your needs. The option L returns a list of all keywords. Enter the following command to issue a list of all processes sorted by memory usage:

tux > ps ax --format pid,rss,cmd --sort rss
  PID   RSS CMD
    2     0 [kthreadd]
    3     0 [ksoftirqd/0]
    4     0 [kworker/0:0]
    5     0 [kworker/0:0H]
    6     0 [kworker/u2:0]
    7     0 [migration/0]
    8     0 [rcu_bh]
[...]
12518 22996 /usr/lib/gnome-settings-daemon-3.0/gnome-settings-daemon
12626 23540 /usr/bin/nautilus --no-default-window
12305 32188 /usr/bin/Xorg :0 -background none -verbose
12594 164900 /usr/bin/gnome-shell
Useful ps Calls
ps aux --sort COLUMN

Sort the output by COLUMN. Replace COLUMN with

  • pmem for physical memory ratio

  • pcpu for CPU ratio

  • rss for resident set size (non-swapped physical memory)

ps axo pid,%cpu,rss,vsz,args,wchan

Shows every process together with its PID, CPU usage ratio, memory size (resident and virtual), name, and the kernel function it is waiting in (wchan).

ps axfo pid,args

Show a process tree.

2.3.3 Process Tree: pstree

The command pstree produces a list of processes in the form of a tree:

tux > pstree
systemd---accounts-daemon---{gdbus}
        |                 |-{gmain}
        |-at-spi-bus-laun---dbus-daemon
        |                 |-{dconf worker}
        |                 |-{gdbus}
        |                 |-{gmain}
        |-at-spi2-registr---{gdbus}
        |-cron
        |-2*[dbus-daemon]
        |-dbus-launch
        |-dconf-service---{gdbus}
        |               |-{gmain}
        |-gconfd-2
        |-gdm---gdm-simple-slav---Xorg
        |     |                 |-gdm-session-wor---gnome-session---gnome-setti+
        |     |                 |                 |               |-gnome-shell+++
        |     |                 |                 |               |-{dconf work+
        |     |                 |                 |               |-{gdbus}
        |     |                 |                 |               |-{gmain}
        |     |                 |                 |-{gdbus}
        |     |                 |                 |-{gmain}
        |     |                 |-{gdbus}
        |     |                 |-{gmain}
        |     |-{gdbus}
        |     |-{gmain}
[...]

The parameter -p adds the process ID to a given name. To have the command lines displayed as well, use the -a parameter:
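
A minimal sketch combining both parameters (output omitted):

tux > pstree -pa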

2.3.4 Table of Processes: top

The command top (an abbreviation of table of processes) displays a list of processes that is refreshed every two seconds. To terminate the program, press Q. The parameter -n 1 terminates the program after a single display of the process list. The following is an example output of the command top -n 1:

tux > top -n 1
Tasks: 128 total,   1 running, 127 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.4 us,  1.2 sy,  0.0 ni, 96.3 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   1535508 total,   699948 used,   835560 free,      880 buffers
KiB Swap:  1541116 total,        0 used,  1541116 free.   377000 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    1 root      20   0  116292   4660   2028 S 0.000 0.303   0:04.45 systemd
    2 root      20   0       0      0      0 S 0.000 0.000   0:00.00 kthreadd
    3 root      20   0       0      0      0 S 0.000 0.000   0:00.07 ksoftirqd+
    5 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 kworker/0+
    6 root      20   0       0      0      0 S 0.000 0.000   0:00.00 kworker/u+
    7 root      rt   0       0      0      0 S 0.000 0.000   0:00.00 migration+
    8 root      20   0       0      0      0 S 0.000 0.000   0:00.00 rcu_bh
    9 root      20   0       0      0      0 S 0.000 0.000   0:00.24 rcu_sched
   10 root      rt   0       0      0      0 S 0.000 0.000   0:00.01 watchdog/0
   11 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 khelper
   12 root      20   0       0      0      0 S 0.000 0.000   0:00.00 kdevtmpfs
   13 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 netns
   14 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 writeback
   15 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 kintegrit+
   16 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 bioset
   17 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 crypto
   18 root       0 -20       0      0      0 S 0.000 0.000   0:00.00 kblockd

By default the output is sorted by CPU usage (column %CPU, shortcut Shift–P). Use the following key combinations to change the sort field:

Shift–M: Resident Memory (RES)
Shift–N: Process ID (PID)
Shift–T: Time (TIME+)

To use any other field for sorting, press F and select a field from the list. To toggle the sort order, use Shift–R.

The parameter -U UID monitors only the processes associated with a particular user. Replace UID with the user ID of the user. Use top -U $(id -u) to show processes of the current user.

2.3.5 z Systems Hypervisor Monitor: hyptop

hyptop provides a dynamic real-time view of a z Systems hypervisor environment, using the kernel infrastructure via debugfs. It works with either the z/VM or the LPAR hypervisor. Depending on the available data, it shows, for example, CPU and memory consumption of active LPARs or z/VM guests. It provides a curses-based user interface similar to the top command. hyptop provides two windows:

  • sys_list: Lists the systems that the current hypervisor is running

  • sys: Shows one system in more detail

You can run hyptop in interactive mode (default) or in batch mode with the -b option. Help in the interactive mode is available by pressing ? after hyptop is started.
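
For example, to collect five updates at two-second intervals in batch mode and store them for later analysis, you might call something like the following (a sketch assuming the -d (delay) and -n (iterations) options of your hyptop version):

root # hyptop -b -d 2 -n 5 > hyptop.log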

Output for the sys_list window under LPAR:

12:30:48 | CPU-T: IFL(18) CP(3) UN(3)     ?=help
system  #cpu    cpu   mgm    Cpu+  Mgm+   online
(str)    (#)    (%)   (%)    (hm)  (hm)    (dhm)
H05LP30   10 461.14 10.18 1547:41  8:15 11:05:59
H05LP33    4 133.73  7.57  220:53  6:12 11:05:54
H05LP50    4  99.26  0.01  146:24  0:12 10:04:24
H05LP02    1  99.09  0.00  269:57  0:00 11:05:58
TRX2CFA    1   2.14  0.03    3:24  0:04 11:06:01
H05LP13    6   1.36  0.34    4:23  0:54 11:05:56
TRX1      19   1.22  0.14   13:57  0:22 11:06:01
TRX2      20   1.16  0.11   26:05  0:25 11:06:00
H05LP55    2   0.00  0.00    0:22  0:00 11:05:52
H05LP56    3   0.00  0.00    0:00  0:00 11:05:52
         413 823.39 23.86 3159:57 38:08 11:06:01

Output for the sys_list window under z/VM:

12:32:21 | CPU-T: UN(16)                          ?=help
system   #cpu    cpu    Cpu+   online memuse memmax wcur
(str)     (#)    (%)    (hm)    (dhm)  (GiB)  (GiB)  (#)
T6360004    6 100.31  959:47 53:05:20   1.56   2.00  100
T6360005    2   0.44    1:11  3:02:26   0.42   0.50  100
T6360014    2   0.27    0:45 10:18:41   0.54   0.75  100
DTCVSW1     1   0.00    0:00 53:16:42   0.01   0.03  100
T6360002    6   0.00  166:26 40:19:18   1.87   2.00  100
OPERATOR    1   0.00    0:00 53:16:42   0.00   0.03  100
T6360008    2   0.00    0:37 30:22:55   0.32   0.75  100
T6360003    6   0.00 3700:57 53:03:09   4.00   4.00  100
NSLCF1      1   0.00    0:02 53:16:41   0.03   0.25  500
EREP        1   0.00    0:00 53:16:42   0.00   0.03  100
PERFSVM     1   0.00    0:53  2:21:12   0.04   0.06    0
TCPIP       1   0.00    0:01 53:16:42   0.01   0.12 3000
DATAMOVE    1   0.00    0:05 53:16:42   0.00   0.03  100
DIRMAINT    1   0.00    0:04 53:16:42   0.01   0.03  100
DTCVSW2     1   0.00    0:00 53:16:42   0.01   0.03  100
RACFVM      1   0.00    0:00 53:16:42   0.01   0.02  100
           75 101.57 5239:47 53:16:42  15.46  22.50 3000

Output for the sys window under LPAR:

14:08:41 | H05LP30 | CPU-T: IFL(18) CP(3) UN(3)                  ? = help
cpuid   type    cpu   mgm visual.
(#)    (str)    (%)   (%) (vis)
0        IFL  96.91  1.96 |############################################ |
1        IFL  81.82  1.46 |#####################################        |
2        IFL  88.00  2.43 |########################################     |
3        IFL  92.27  1.29 |##########################################   |
4        IFL  83.32  1.05 |#####################################        |
5        IFL  92.46  2.59 |##########################################   |
6        IFL   0.00  0.00 |                                             |
7        IFL   0.00  0.00 |                                             |
8        IFL   0.00  0.00 |                                             |
9        IFL   0.00  0.00 |                                             |
             534.79 10.78

Output for the sys window under z/VM:

15:46:57 | T6360003 | CPU-T: UN(16)                  ? = help
cpuid     cpu visual
(#)       (%) (vis)
0      548.72 |#########################################    |
        548.72

2.3.6 A top-like I/O Monitor: iotop

The iotop utility displays a table of I/O usage by processes or threads.

Note
Note: Installing iotop

iotop is not installed by default. You need to install it manually with zypper in iotop as root.

iotop displays columns for the I/O bandwidth read and written by each process during the sampling period. It also displays the percentage of time the process spent while swapping in and while waiting on I/O. For each process, its I/O priority (class/level) is shown. In addition, the total I/O bandwidth read and written during the sampling period is displayed at the top of the interface.

  • The Left and Right arrow keys change the sorting.

  • R reverses the sort order.

  • O toggles between showing all processes and threads (default view) and showing only those doing I/O. (This function is similar to adding --only on the command line.)

  • P toggles between showing threads (default view) and processes. (This function is similar to adding --processes on the command line.)

  • A toggles between showing the current I/O bandwidth (default view) and accumulated I/O operations since iotop was started. (This function is similar to --accumulated.)

  • I lets you change the priority of a thread or a process's threads.

  • Q quits iotop.

  • Pressing any other key will force a refresh.

The following is an example output of the command iotop --only while find and emacs are running:

root # iotop --only
Total DISK READ: 50.61 K/s | Total DISK WRITE: 11.68 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 3416 be/4 tux         50.61 K/s    0.00 B/s  0.00 %  4.05 % find /
  275 be/3 root        0.00 B/s    3.89 K/s  0.00 %  2.34 % [jbd2/sda2-8]
 5055 be/4 tux          0.00 B/s    3.89 K/s  0.00 %  0.04 % emacs

iotop can also be used in batch mode (-b) and its output stored in a file for later analysis. For a complete set of options, see the manual page (man 8 iotop).
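
For example, to record ten samples of only the processes doing I/O at two-second intervals into a file (arbitrary values), you might call:

root # iotop -b -o -d 2 -n 10 > iotop.log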

2.3.7 Modifying a Process's Niceness: nice and renice

The kernel determines which processes require more CPU time than others by the process's nice level, also called niceness. The higher the nice level of a process is, the less CPU time it will take from other processes. Nice levels range from -20 (the least nice level) to 19. Negative values can only be set by root.

Adjusting the niceness level is useful when running a non-time-critical process that lasts long and uses large amounts of CPU time, for example, compiling a kernel on a system that also performs other tasks. Making such a process nicer ensures that the other tasks, for example a Web server, have a higher priority.

Calling nice without any parameters prints the current niceness:

tux > nice
0

Running nice COMMAND increments the current nice level for the given command by 10. Using nice -n LEVEL COMMAND lets you specify a new niceness relative to the current one.
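
For example, to start a long-running, non-critical job (here a hypothetical backup script) at the lowest priority:

tux > nice -n 19 ./backup.sh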

To change the niceness of a running process, use renice PRIORITY -p PROCESS_ID, for example:

tux > renice +5 3266

To renice all processes owned by a specific user, use the option -u USER. Process groups are reniced by the option -g PROCESS_GROUP_ID.
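
For example, to renice all processes owned by the user tux to a niceness of 5:

root # renice +5 -u tux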

2.4 Memory

2.4.1 Memory Usage: free

The utility free examines RAM and swap usage. Details of both free and used memory and swap areas are shown:

tux > free
             total       used       free     shared    buffers     cached
Mem:      32900500   32703448     197052          0     255668    5787364
-/+ buffers/cache:   26660416    6240084
Swap:      2046972     304680    1742292

The options -b, -k, -m, -g show the output in bytes, KB, MB, or GB, respectively. The parameter -s DELAY ensures that the display is refreshed every DELAY seconds. For example, free -s 1.5 produces an update every 1.5 seconds.

2.4.2 Detailed Memory Usage: /proc/meminfo

Use /proc/meminfo to get more detailed information on memory usage than with free. In fact, free uses some data from this file. See an example output from a 64-bit system below. Note that it differs slightly on 32-bit systems because of different memory management:

MemTotal:        1942636 kB
MemFree:         1294352 kB
MemAvailable:    1458744 kB
Buffers:             876 kB
Cached:           278476 kB
SwapCached:            0 kB
Active:           368328 kB
Inactive:         199368 kB
Active(anon):     288968 kB
Inactive(anon):    10568 kB
Active(file):      79360 kB
Inactive(file):   188800 kB
Unevictable:          80 kB
Mlocked:              80 kB
SwapTotal:       2103292 kB
SwapFree:        2103292 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:        288592 kB
Mapped:            70444 kB
Shmem:             11192 kB
Slab:              40916 kB
SReclaimable:      17712 kB
SUnreclaim:        23204 kB
KernelStack:        2000 kB
PageTables:        10996 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     3074608 kB
Committed_AS:    1407208 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      145996 kB
VmallocChunk:   34359588844 kB
HardwareCorrupted:     0 kB
AnonHugePages:     86016 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       79744 kB
DirectMap2M:     2017280 kB

These entries stand for the following:

MemTotal

Total amount of RAM.

MemFree

Amount of unused RAM.

MemAvailable

Estimate of how much memory is available for starting new applications without swapping.

Buffers

File buffer cache in RAM containing file system metadata.

Cached

Page cache in RAM. This excludes buffer cache and swap cache, but includes Shmem memory.

SwapCached

Page cache for swapped-out memory.

Active, Active(anon), Active(file)

Recently used memory that will not be reclaimed unless necessary or on explicit request. Active is the sum of Active(anon) and Active(file):

  • Active(anon) tracks swap-backed memory. This includes private and shared anonymous mappings and private file pages after copy-on-write.

  • Active(file) tracks other file system backed memory.

Inactive, Inactive(anon), Inactive(file)

Less recently used memory that will usually be reclaimed first. Inactive is the sum of Inactive(anon) and Inactive(file):

  • Inactive(anon) tracks swap-backed memory. This includes private and shared anonymous mappings and private file pages after copy-on-write.

  • Inactive(file) tracks other file system backed memory.

Unevictable

Amount of memory that cannot be reclaimed (for example, because it is Mlocked or used as a RAM disk).

Mlocked

Amount of memory that is locked with the mlock system call. mlock allows processes to lock parts of their address space into physical RAM, which prevents these pages from being swapped out.

SwapTotal

Amount of swap space.

SwapFree

Amount of unused swap space.

Dirty

Amount of memory waiting to be written to disk, because it contains changes compared to the backing storage. Dirty data can be explicitly synchronized either by the application or by the kernel after a short delay. A large amount of dirty data may take considerable time to write to disk resulting in stalls. The total amount of dirty data that can exist at any time can be controlled with the sysctl parameters vm.dirty_ratio or vm.dirty_bytes (refer to Section 14.1.5, “Writeback” for more details).

Writeback

Amount of memory that is currently being written to disk.

Mapped

Memory claimed with the mmap system call.

Shmem

Memory shared between groups of processes, such as IPC data, tmpfs data, and shared anonymous memory.

Slab

Memory allocation for internal data structures of the kernel.

SReclaimable

Slab section that can be reclaimed, such as caches (inode, dentry, etc.).

SUnreclaim

Slab section that cannot be reclaimed.

KernelStack

Amount of kernel space memory used by applications (through system calls).

PageTables

Amount of memory dedicated to page tables of all processes.

NFS_Unstable

NFS pages that have already been sent to the server, but are not yet committed there.

Bounce

Memory used for bounce buffers of block devices.

WritebackTmp

Memory used by FUSE for temporary writeback buffers.

CommitLimit

Amount of memory available to the system based on the overcommit ratio setting. This is only enforced if strict overcommit accounting is enabled.

Committed_AS

An approximation of the total amount of memory (RAM and swap) that the current workload would need in the worst case.

VmallocTotal

Amount of allocated kernel virtual address space.

VmallocUsed

Amount of used kernel virtual address space.

VmallocChunk

The largest contiguous block of available kernel virtual address space.

HardwareCorrupted

Amount of failed memory (can only be detected when using ECC RAM).

AnonHugePages

Anonymous hugepages that are mapped into user space page tables. These are allocated transparently for processes without being specifically requested, therefore they are also known as transparent hugepages (THP).

HugePages_Total

Number of preallocated hugepages for use by SHM_HUGETLB and MAP_HUGETLB or through the hugetlbfs file system, as defined in /proc/sys/vm/nr_hugepages.

HugePages_Free

Number of hugepages available.

HugePages_Rsvd

Number of hugepages that are reserved: a commitment to allocate them from the pool has been made, but no allocation has occurred yet.

HugePages_Surp

Number of hugepages available beyond HugePages_Total (surplus), as defined in /proc/sys/vm/nr_overcommit_hugepages.

Hugepagesize

Size of a hugepage—on AMD64/Intel 64 the default is 2048 KB.

DirectMap4k etc.

Amount of kernel memory that is mapped to pages with a given size (in the example: 4 kB).
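
To extract a single entry from /proc/meminfo, the standard text tools are sufficient. For example, the following commands (using the sample values from above) print the MemAvailable line and its numeric value only:

tux > grep MemAvailable /proc/meminfo
MemAvailable:    1458744 kB
tux > awk '/^MemAvailable:/ { print $2 }' /proc/meminfo
1458744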

2.4.3 Process Memory Usage: smaps

Exactly determining how much memory a certain process is consuming is not possible with standard tools like top or ps. Use the smaps subsystem, introduced in kernel 2.6.14, if you need exact data. It can be found at /proc/PID/smaps and shows you the number of clean and dirty memory pages the process with the ID PID is using at that time. It differentiates between shared and private memory, so you can see how much memory the process is using without including memory shared with other processes. For more information see /usr/src/linux/Documentation/filesystems/proc.txt (requires the package kernel-source to be installed).

smaps is expensive to read. Therefore it is not recommended to monitor it regularly, but only when closely monitoring a certain process.
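
For example, the following sketch sums up the proportional set size (Pss) of a single process. The PID 5055 (the emacs process from the iotop example above) is used for illustration only:

root # awk '/^Pss:/ { kb += $2 } END { print kb " kB" }' /proc/5055/smaps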

2.5 Networking

Tip
Tip: Traffic Shaping

In case the network bandwidth is lower than expected, you should first check if any traffic shaping rules are active for your network segment.

2.5.1 Basic Network Diagnostics: ip

ip is a powerful tool to set up and control network interfaces. You can also use it to quickly view basic statistics about network interfaces of the system. For example, whether the interface is up or how many errors, dropped packets, or packet collisions there are.

If you run ip with no additional parameter, it displays a help output. To list all network interfaces, enter ip addr show (or abbreviated as ip a). ip addr show up lists only running network interfaces. ip -s link show DEVICE lists statistics for the specified interface only:

root # ip -s link show br0
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 00:19:d1:72:d4:30 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    6346104756 9265517  0       10860   0       0
    TX: bytes  packets  errors  dropped carrier collsns
    3996204683 3655523  0       0       0       0

ip can also show interfaces (link), routing tables (route), and much more—refer to man 8 ip for details.

root # ip route
default via 192.168.2.1 dev eth1
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.100
192.168.2.0/24 dev eth1  proto kernel  scope link  src 192.168.2.101
192.168.2.0/24 dev eth2  proto kernel  scope link  src 192.168.2.102
root # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:44:30:51 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:a3:c1:fb brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:32:a4:09 brd ff:ff:ff:ff:ff:ff

2.5.2 Show the Network Usage of Processes: nethogs

In some cases, for example if the network traffic suddenly becomes very high, it is desirable to quickly find out which applications are causing the traffic. nethogs, a tool with a design similar to top, shows incoming and outgoing traffic for all relevant processes:

PID   USER  PROGRAM                                DEV   SENT   RECEIVED
27145 root   zypper                                eth0  5.719  391.749 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30015        0.102    2.326 KB/sec
26635 tux    /usr/lib64/firefox/firefox            eth0  0.026    0.026 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30045        0.000    0.021 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30045        0.000    0.018 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30015        0.000    0.018 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30045        0.000    0.017 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30045        0.000    0.017 KB/sec
?     root   ..0:113:80c0:8080:10:160:0:100:30045        0.069    0.000 KB/sec
?     root   unknown TCP                                 0.000    0.000 KB/sec

TOTAL                                                  5.916  394.192 KB/sec

Like in top, nethogs features interactive commands:

M: cycle between display modes (kb/s, kb, b, mb)
R: sort by RECEIVED
S: sort by SENT
Q: quit
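
By default, nethogs monitors all suitable network interfaces. To restrict monitoring to specific devices, pass them as arguments, for example:

root # nethogs eth0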

2.5.3 Ethernet Cards in Detail: ethtool

ethtool can display and change detailed aspects of your Ethernet network device. By default it prints the current settings of the specified device.

root # ethtool eth0
Settings for eth0:
 Supported ports: [ TP ]
 Supported link modes:   10baseT/Half 10baseT/Full
                         100baseT/Half 100baseT/Full
                         1000baseT/Full
 Supports auto-negotiation: Yes
 Advertised link modes:  10baseT/Half 10baseT/Full
                         100baseT/Half 100baseT/Full
                         1000baseT/Full
 Advertised pause frame use: No
[...]
 Link detected: yes

The following table shows ethtool options that you can use to query the device for specific information:

Table 2.1: List of Query Options of ethtool

ethtool option    it queries the device for

-a                pause parameter information
-c                interrupt coalescing information
-g                Rx/Tx (receive/transmit) ring parameter information
-i                associated driver information
-k                offload information
-S                NIC and driver-specific statistics
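
For example, to query the driver associated with an interface (the output shown is illustrative):

root # ethtool -i eth0
driver: e1000
version: 7.3.21-k8-NAPI
[...]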

2.5.4 Show the Network Status: ss

ss is a tool to dump socket statistics and replaces the netstat command. To list all connections, use ss without parameters:

root # ss
Netid  State      Recv-Q Send-Q   Local Address:Port       Peer Address:Port
u_str  ESTAB      0      0                    * 14082                 * 14083
u_str  ESTAB      0      0                    * 18582                 * 18583
u_str  ESTAB      0      0                    * 19449                 * 19450
u_str  ESTAB      0      0      @/tmp/dbus-gmUUwXABPV 18784           * 18783
u_str  ESTAB      0      0      /var/run/dbus/system_bus_socket 19383 * 19382
u_str  ESTAB      0      0      @/tmp/dbus-gmUUwXABPV 18617           * 18616
u_str  ESTAB      0      0      @/tmp/dbus-58TPPDv8qv 19352           * 19351
u_str  ESTAB      0      0                    * 17658                 * 17657
u_str  ESTAB      0      0                    * 17693                 * 17694
[..]

To show all network ports currently open, use the following command:

root # ss -l
Netid  State      Recv-Q Send-Q      Local Address:Port  Peer Address:Port
nl     UNCONN     0      0                 rtnl:4195117                  *
nl     UNCONN     0      0       rtnl:wickedd-auto4/811                  *
nl     UNCONN     0      0       rtnl:wickedd-dhcp4/813                  *
nl     UNCONN     0      0                 rtnl:4195121                  *
nl     UNCONN     0      0                 rtnl:4195115                  *
nl     UNCONN     0      0       rtnl:wickedd-dhcp6/814                  *
nl     UNCONN     0      0                  rtnl:kernel                  *
nl     UNCONN     0      0             rtnl:wickedd/817                  *
nl     UNCONN     0      0                 rtnl:4195118                  *
nl     UNCONN     0      0                rtnl:nscd/706                  *
nl     UNCONN     4352   0              tcpdiag:ss/2381                  *
[...]

When displaying network connections, you can specify the socket type to display: for example TCP (-t) or UDP (-u). The -p option shows the PID and name of the program to which each socket belongs.

The following example lists all TCP sockets and the programs using them. The -a option makes sure that both listening and non-listening (established) sockets are shown:

root # ss -t -a -p
State    Recv-Q Send-Q  Local Address:Port   Peer Address:Port
LISTEN   0      128                  *:ssh                 *:*  users:(("sshd",1551,3))
LISTEN   0      100         127.0.0.1:smtp                 *:*  users:(("master",1704,13))
ESTAB    0      132      10.120.65.198:ssh  10.120.4.150:55715  users:(("sshd",2103,5))
LISTEN   0      128                 :::ssh                :::*  users:(("sshd",1551,4))
LISTEN   0      100               ::1:smtp                :::*  users:(("master",1704,14))
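
The options can also be combined. For example, a common invocation to show all listening TCP and UDP sockets together with numeric ports and the owning processes is:

root # ss -tulpn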

2.6 The /proc File System

The /proc file system is a pseudo file system in which the kernel stores important information in the form of virtual files. For example, display the CPU type with this command:

tux > cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 30
model name      : Intel(R) Core(TM) i5 CPU         750  @ 2.67GHz
stepping        : 5
microcode       : 0x6
cpu MHz         : 1197.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips        : 5333.85
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
[...]
Tip
Tip: Detailed Processor Information

Detailed information about the processor on the AMD64/Intel 64 architecture is also available by running x86info.

Query the allocation and use of interrupts with the following command:

tux > cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
  0:        121          0          0          0   IO-APIC-edge      timer
  8:          0          0          0          1   IO-APIC-edge      rtc0
  9:          0          0          0          0   IO-APIC-fasteoi   acpi
 16:          0      11933          0          0   IO-APIC-fasteoi   ehci_hcd:+
 18:          0          0          0          0   IO-APIC-fasteoi   i801_smbus
 19:          0     117978          0          0   IO-APIC-fasteoi   ata_piix,+
 22:          0          0    3275185          0   IO-APIC-fasteoi   enp5s1
 23:     417927          0          0          0   IO-APIC-fasteoi   ehci_hcd:+
 40:    2727916          0          0          0  HPET_MSI-edge      hpet2
 41:          0    2749134          0          0  HPET_MSI-edge      hpet3
 42:          0          0    2759148          0  HPET_MSI-edge      hpet4
 43:          0          0          0    2678206  HPET_MSI-edge      hpet5
 45:          0          0          0          0   PCI-MSI-edge      aerdrv, P+
 46:          0          0          0          0   PCI-MSI-edge      PCIe PME,+
 47:          0          0          0          0   PCI-MSI-edge      PCIe PME,+
 48:          0          0          0          0   PCI-MSI-edge      PCIe PME,+
 49:          0          0          0        387   PCI-MSI-edge      snd_hda_i+
 50:     933117          0          0          0   PCI-MSI-edge      nvidia
NMI:       2102       2023       2031       1920   Non-maskable interrupts
LOC:         92         71         57         41   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
PMI:       2102       2023       2031       1920   Performance monitoring int+
IWI:      47331      45725      52464      46775   IRQ work interrupts
RTR:          2          0          0          0   APIC ICR read retries
RES:     472911     396463     339792     323820   Rescheduling interrupts
CAL:      48389      47345      54113      50478   Function call interrupts
TLB:      28410      26804      24389      26157   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:         40         40         40         40   Machine check polls
ERR:          0
MIS:          0

The address assignment of executables and libraries is contained in the maps file:

tux > cat /proc/self/maps
08048000-0804c000 r-xp 00000000 03:03 17753      /bin/cat
0804c000-0804d000 rw-p 00004000 03:03 17753      /bin/cat
0804d000-0806e000 rw-p 0804d000 00:00 0          [heap]
b7d27000-b7d5a000 r--p 00000000 03:03 11867      /usr/lib/locale/en_GB.utf8/
b7d5a000-b7e32000 r--p 00000000 03:03 11868      /usr/lib/locale/en_GB.utf8/
b7e32000-b7e33000 rw-p b7e32000 00:00 0
b7e33000-b7f45000 r-xp 00000000 03:03 8837       /lib/libc-2.3.6.so
b7f45000-b7f46000 r--p 00112000 03:03 8837       /lib/libc-2.3.6.so
b7f46000-b7f48000 rw-p 00113000 03:03 8837       /lib/libc-2.3.6.so
b7f48000-b7f4c000 rw-p b7f48000 00:00 0
b7f52000-b7f53000 r--p 00000000 03:03 11842      /usr/lib/locale/en_GB.utf8/
[...]
b7f5b000-b7f61000 r--s 00000000 03:03 9109       /usr/lib/gconv/gconv-module
b7f61000-b7f62000 r--p 00000000 03:03 9720       /usr/lib/locale/en_GB.utf8/
b7f62000-b7f76000 r-xp 00000000 03:03 8828       /lib/ld-2.3.6.so
b7f76000-b7f78000 rw-p 00013000 03:03 8828       /lib/ld-2.3.6.so
bfd61000-bfd76000 rw-p bfd61000 00:00 0          [stack]
ffffe000-fffff000 ---p 00000000 00:00 0          [vdso]

A lot more information can be obtained from the /proc file system. Some important files and their contents are:

/proc/devices

Available devices

/proc/modules

Kernel modules loaded

/proc/cmdline

Kernel command line

/proc/meminfo

Detailed information about memory usage

/proc/config.gz

gzip-compressed configuration file of the kernel currently running

/proc/PID/

Find information about processes currently running in the /proc/NNN directories, where NNN is the process ID (PID) of the relevant process. Every process can find its own characteristics in /proc/self/.

Further information is available in the text file /usr/src/linux/Documentation/filesystems/proc.txt (this file is available when the package kernel-source is installed).

2.6.1 procinfo

Important information from the /proc file system is summarized by the command procinfo:

tux > procinfo
Linux 3.11.10-17-desktop (geeko@buildhost) (gcc 4.8.1 20130909) #1 4CPU [jupiter.example.com]

Memory:      Total        Used        Free      Shared     Buffers      Cached
Mem:       8181908     8000632      181276           0       85472     2850872
Swap:     10481660        1576    10480084

Bootup: Mon Jul 28 09:54:13 2014    Load average: 1.61 0.85 0.74 2/904 25949

user  :       1:54:41.84  12.7%  page in :    2107312  disk 1:    52212r   20199w
nice  :       0:00:00.46   0.0%  page out:    1714461  disk 2:    19387r   10928w
system:       0:25:38.00   2.8%  page act:     466673  disk 3:      548r      10w
IOwait:       0:04:16.45   0.4%  page dea:     272297
hw irq:       0:00:00.42   0.0%  page flt:  105754526
sw irq:       0:01:26.48   0.1%  swap in :          0
idle  :      12:14:43.65  81.5%  swap out:        394
guest :       0:02:18.59   0.2%
uptime:       3:45:22.24         context :   99809844

irq  0:       121 timer                 irq 41:   3238224 hpet3
irq  8:         1 rtc0                  irq 42:   3251898 hpet4
irq  9:         0 acpi                  irq 43:   3156368 hpet5
irq 16:     14589 ehci_hcd:usb1         irq 45:         0 aerdrv, PCIe PME
irq 18:         0 i801_smbus            irq 46:         0 PCIe PME, pciehp
irq 19:    124861 ata_piix, ata_piix, f irq 47:         0 PCIe PME, pciehp
irq 22:   3742817 enp5s1                irq 48:         0 PCIe PME, pciehp
irq 23:    479248 ehci_hcd:usb2         irq 49:       387 snd_hda_intel
irq 40:   3216894 hpet2                 irq 50:   1088673 nvidia

To see all the information, use the parameter -a. The parameter -nN produces updates of the information every N seconds. In this case, terminate the program by pressing Q.

By default, the cumulative values are displayed. The parameter -d produces the differential values. For example, procinfo -dn5 displays the values that have changed in the last five seconds.

2.6.2 System Control Parameters: /proc/sys/

System control parameters are used to modify the Linux kernel parameters at runtime. They reside in /proc/sys/ and can be viewed and modified with the sysctl command. To list all parameters, run sysctl -a. A single parameter can be listed with sysctl PARAMETER_NAME.

Parameters are grouped into categories and can be listed with sysctl CATEGORY or by listing the contents of the respective directories. The most important categories are listed below. The links to further readings require the installation of the package kernel-source.

sysctl dev (/proc/sys/dev/)

Device-specific information.

sysctl fs (/proc/sys/fs/)

Used file handles, quotas, and other file system-oriented parameters. For details see /usr/src/linux/Documentation/sysctl/fs.txt.

sysctl kernel (/proc/sys/kernel/)

Information about the task scheduler, system shared memory, and other kernel-related parameters. For details see /usr/src/linux/Documentation/sysctl/kernel.txt.

sysctl net (/proc/sys/net/)

Information about network bridges, and general network parameters (mainly the ipv4/ subdirectory). For details see /usr/src/linux/Documentation/sysctl/net.txt.

sysctl vm (/proc/sys/vm/)

Entries in this path relate to information about the virtual memory, swapping, and caching. For details see /usr/src/linux/Documentation/sysctl/vm.txt.

To set or change a parameter for the current session, use the command sysctl -w PARAMETER=VALUE. To permanently change a setting, add a line PARAMETER=VALUE to /etc/sysctl.conf.
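
For example, to read and then temporarily lower the vm.swappiness parameter (the values shown are illustrative):

root # sysctl vm.swappiness
vm.swappiness = 60
root # sysctl -w vm.swappiness=30
vm.swappiness = 30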

2.7 Hardware Information

2.7.1 PCI Resources: lspci

Note
Note: Accessing PCI Configuration

Most operating systems require root user privileges to grant access to the computer's PCI configuration.

The command lspci lists the PCI resources:

root # lspci
00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE \
    DRAM Controller/Host-Hub Interface (rev 01)
00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE \
    Host-to-AGP Bridge (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801DB/DBL/DBM \
    (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801DB/DBL/DBM \
    (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801DB/DBL/DBM \
    (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801DB/DBM \
    (ICH4/ICH4-M) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 81)
00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) \
    LPC Interface Bridge (rev 01)
00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE \
    Controller (rev 01)
00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) \
    SMBus Controller (rev 01)
00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM \
    (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
01:00.0 VGA compatible controller: Matrox Graphics, Inc. G400/G450 (rev 85)
02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (LOM) \
    Ethernet Controller (rev 81)

Using -v results in a more detailed listing:

root # lspci -v
[...]
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet \
Controller (rev 02)
  Subsystem: Intel Corporation PRO/1000 MT Desktop Adapter
  Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 19
  Memory at f0000000 (32-bit, non-prefetchable) [size=128K]
  I/O ports at d010 [size=8]
  Capabilities: [dc] Power Management version 2
  Capabilities: [e4] PCI-X non-bridge device
  Kernel driver in use: e1000
  Kernel modules: e1000

Information about device name resolution is obtained from the file /usr/share/pci.ids. PCI IDs not listed in this file are marked Unknown device.

The parameter -vv produces all the information that could be queried by the program. To view the pure numeric values, use the parameter -n.

2.7.2 USB Devices: lsusb

The command lsusb lists all USB devices. Use the option -v to print a more detailed list. The detailed information is read from the directory /proc/bus/usb/. The following is the output of lsusb with these USB devices attached: hub, memory stick, hard disk, and mouse.

root # lsusb
Bus 004 Device 007: ID 0ea0:2168 Ours Technology, Inc. Transcend JetFlash \
    2.0 / Astone USB Drive
Bus 004 Device 006: ID 04b4:6830 Cypress Semiconductor Corp. USB-2.0 IDE \
    Adapter
Bus 004 Device 005: ID 05e3:0605 Genesys Logic, Inc.
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 005: ID 046d:c012 Logitech, Inc. Optical Mouse
Bus 001 Device 001: ID 0000:0000

2.7.3 Monitoring and Tuning the Thermal Subsystem: tmon

tmon is a tool to help visualize, tune, and test the complex thermal subsystem. When started without parameters, tmon runs in monitoring mode:

┌──────THERMAL ZONES(SENSORS)──────────────────────────────┐
│Thermal Zones:                 acpitz00                   │
│Trip Points:                   PC                         │
└──────────────────────────────────────────────────────────┘
┌─────────── COOLING DEVICES ──────────────────────────────┐
│ID  Cooling Dev   Cur    Max   Thermal Zone Binding       │
│00    Processor     0      3   ││││││││││││               │
│01    Processor     0      3   ││││││││││││               │
│02    Processor     0      3   ││││││││││││               │
│03    Processor     0      3   ││││││││││││               │
│04 intel_powerc    -1     50   ││││││││││││               │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│                         10        20        30        40 │
│acpitz 0:[  8][>>>>>>>>>P9                    C31         │
└──────────────────────────────────────────────────────────┘
┌────────────────── CONTROLS ──────────────────────────────┐
│PID gain: kp=0.36 ki=5.00 kd=0.19 Output 0.00             │
│Target Temp: 65.0C, Zone: 0, Control Device: None         │
└──────────────────────────────────────────────────────────┘

 Ctrl-c - Quit   TAB - Tuning

For detailed information on how to interpret the data, how to log thermal data and how to use tmon to test and tune cooling devices and sensors, refer to the man page: man 8 tmon. The package tmon is not installed by default.

2.7.4 MCELog: Machine Check Exceptions (MCE)

The mcelog package logs and parses/translates Machine Check Exceptions (MCE) caused by hardware errors (including memory errors). Formerly, this was done by a cron job executed hourly. Now hardware errors are immediately processed by the mcelog daemon.

However, the mcelog service is not enabled by default, so memory and CPU errors are not logged by default either. In addition, mcelog can also handle predictive bad page offlining and automatic core offlining when cache errors happen.

The service can either be enabled and started via the YaST system services editor or via command line:

root # systemctl enable mcelog
root # systemctl start mcelog

2.7.5 x86_64: dmidecode: DMI Table Decoder

dmidecode shows the machine's DMI table containing information such as serial numbers and BIOS revisions of the hardware.

root # dmidecode
# dmidecode 2.12
SMBIOS 2.5 present.
27 structures occupying 1298 bytes.
Table at 0x000EB250.

Handle 0x0000, DMI type 4, 35 bytes
Processor Information
        Socket Designation: J1PR
        Type: Central Processor
        Family: Other
        Manufacturer: Intel(R) Corporation
        ID: E5 06 01 00 FF FB EB BF
        Version: Intel(R) Core(TM) i5 CPU         750  @ 2.67GHz
        Voltage: 1.1 V
        External Clock: 133 MHz
        Max Speed: 4000 MHz
        Current Speed: 2667 MHz
        Status: Populated, Enabled
        Upgrade: Other
        L1 Cache Handle: 0x0004
        L2 Cache Handle: 0x0003
        L3 Cache Handle: 0x0001
        Serial Number: Not Specified
        Asset Tag: Not Specified
        Part Number: Not Specified
[..]

2.8 Files and File Systems

2.8.1 Determine the File Type: file

The command file determines the type of a file or a list of files by checking /usr/share/misc/magic.

tux > file /usr/bin/file
/usr/bin/file: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), \
    for GNU/Linux 2.6.4, dynamically linked (uses shared libs), stripped

The parameter -f LIST specifies a file with a list of file names to examine. The -z option allows file to look inside compressed files:

tux > file /usr/share/man/man1/file.1.gz
/usr/share/man/man1/file.1.gz: gzip compressed data, from Unix, max compression
tux > file -z /usr/share/man/man1/file.1.gz
/usr/share/man/man1/file.1.gz: troff or preprocessor input text \
    (gzip compressed data, from Unix, max compression)

The parameter -i outputs a MIME type string rather than the traditional description.

tux > file -i /usr/share/misc/magic
/usr/share/misc/magic: text/plain charset=utf-8

2.8.2 File Systems and Their Usage: mount, df and du

The command mount shows which file system (device and type) is mounted at which mount point:

root # mount
/dev/sda2 on / type ext4 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
devtmpfs on /dev type devtmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda3 on /home type ext3 (rw)
securityfs on /sys/kernel/security type securityfs (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
gvfs-fuse-daemon on /home/tux/.gvfs type fuse.gvfs-fuse-daemon \
(rw,nosuid,nodev,user=tux)

Obtain information about total usage of the file systems with the command df. The parameter -h (or --human-readable) transforms the output into a human-readable form.

tux > df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              20G  5,9G   13G  32% /
devtmpfs              1,6G  236K  1,6G   1% /dev
tmpfs                 1,6G  668K  1,6G   1% /dev/shm
/dev/sda3             208G   40G  159G  20% /home

Display the total size of all the files in a given directory and its subdirectories with the command du. The parameter -s suppresses the output of detailed information and gives only a total for each argument. -h again transforms the output into a human-readable form:

tux > du -sh /opt
192M    /opt

2.8.3 Additional Information about ELF Binaries

Read the content of binaries with the readelf utility. This even works with ELF files that were built for other hardware architectures:

tux > readelf --file-header /bin/ls
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              EXEC (Executable file)
  Machine:                           Advanced Micro Devices X86-64
  Version:                           0x1
  Entry point address:               0x402540
  Start of program headers:          64 (bytes into file)
  Start of section headers:          95720 (bytes into file)
  Flags:                             0x0
  Size of this header:               64 (bytes)
  Size of program headers:           56 (bytes)
  Number of program headers:         9
  Size of section headers:           64 (bytes)
  Number of section headers:         32
  Section header string table index: 31

2.8.4 File Properties: stat

The command stat displays file properties:

tux > stat /etc/profile
  File: `/etc/profile'
  Size: 9662            Blocks: 24         IO Block: 4096   regular file
Device: 802h/2050d      Inode: 132349      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2009-03-20 07:51:17.000000000 +0100
Modify: 2009-01-08 19:21:14.000000000 +0100
Change: 2009-03-18 12:55:31.000000000 +0100

The parameter --file-system produces details of the properties of the file system in which the specified file is located:

tux > stat /etc/profile --file-system
  File: "/etc/profile"
    ID: d4fb76e70b4d1746 Namelen: 255     Type: ext2/ext3
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 2581445    Free: 1717327    Available: 1586197
Inodes: Total: 655776     Free: 490312

2.9 User Information

2.9.1 User Accessing Files: fuser

It can be useful to determine what processes or users are currently accessing certain files. Suppose, for example, you want to unmount a file system mounted at /mnt. umount returns "device is busy." The command fuser can then be used to determine what processes are accessing the device:

tux > fuser -v /mnt/*

                     USER        PID ACCESS COMMAND
/mnt/notes.txt       tux    26597 f....  less

After terminating the less process, which was running on another terminal, the file system can successfully be unmounted. When used with the -k option, fuser terminates the processes accessing the file as well.
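
For example, to forcibly end all processes accessing files under /mnt (use with care, as the processes are killed without further warning):

root # fuser -k /mnt/*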

2.9.2 Who Is Doing What: w

With the command w, find out who is logged in to the system and what each user is doing. For example:

tux > w
 16:00:59 up 1 day,  2:41,  3 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
tux      :0       console          Wed13   ?xdm?   8:15   0.03s /usr/lib/gdm/gd
tux      console  :0               Wed13   26:41m  0.00s  0.03s /usr/lib/gdm/gd
tux      pts/0    :0               Wed13   20:11   0.10s  2.89s /usr/lib/gnome-

If any users of other systems have logged in remotely, the parameter -f shows the computers from which they have established the connection.

2.10 Time and Date

2.10.1 Time Measurement with time

Determine the time spent by commands with the time utility. This utility is available in two versions: as a Bash built-in and as a program (/usr/bin/time).

tux > time find . > /dev/null

real    0m4.051s
user    0m0.042s
sys     0m0.205s

real

The real time that elapsed from the command's start-up until it finished.

user

CPU time of the user as reported by the times system call.

sys

CPU time of the system as reported by the times system call.

The output of /usr/bin/time is much more detailed. It is recommended to run it with the -v switch to produce human-readable output.

tux > /usr/bin/time -v find . > /dev/null
        Command being timed: "find ."
        User time (seconds): 0.24
        System time (seconds): 2.08
        Percent of CPU this job got: 25%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:09.03
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 2516
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 1564
        Voluntary context switches: 36660
        Involuntary context switches: 496
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

2.11 Graph Your Data: RRDtool

The world around you is full of data that can easily be measured over time, for example changes in temperature, or the amount of data sent or received by your computer's network interface. RRDtool can help you store and visualize such data in detailed and customizable graphs.

RRDtool is available for most Unix platforms and Linux distributions. SUSE® Linux Enterprise Desktop ships RRDtool as well. Install it either with YaST or by entering zypper install rrdtool on the command line as root.

Tip
Tip: Bindings

There are Perl, Python, Ruby, and PHP bindings available for RRDtool, so that you can write your own monitoring scripts in your preferred scripting language.

2.11.1 How RRDtool Works

RRDtool is an abbreviation of Round Robin Database tool. Round Robin is a method for working with a constant amount of data. It uses the principle of a circular buffer, where the data row being read has neither beginning nor end. RRDtool uses Round Robin Databases to store and read its data.

As mentioned above, RRDtool is designed to work with data that change over time. The ideal case is a sensor that repeatedly reads measured data (like temperature or speed) at constant intervals, and then exports them in a given format. Such data are perfectly ready for RRDtool, and it is easy to process them and create the desired output.

Sometimes it is not possible to obtain the data automatically and regularly. In that case, the data needs to be pre-processed into a format RRDtool accepts before it is supplied, and you often need to operate RRDtool manually.

The following is a simple example of basic RRDtool usage. It illustrates all three important phases of the usual RRDtool workflow: creating a database, updating measured values, and viewing the output.

2.11.2 A Practical Example

Suppose we want to collect and view information about the memory usage in the Linux system as it changes over time. To make the example more vivid, we measure the currently free memory over a period of 40 seconds in 4-second intervals. Three applications that usually consume a lot of system memory are started and closed: the Firefox Web browser, the Evolution e-mail client, and the Eclipse development framework.

2.11.2.1 Collecting Data

RRDtool is very often used to measure and visualize network traffic. In such cases, the Simple Network Management Protocol (SNMP) is used. This protocol can query network devices for the relevant values of their internal counters. These are exactly the values to be stored with RRDtool. For more information on SNMP, see http://www.net-snmp.org/.

Our situation is different—we need to obtain the data manually. A helper script free_mem.sh repeatedly reads the current state of free memory and writes it to the standard output.

tux > cat free_mem.sh
INTERVAL=4
for steps in {1..10}
do
    DATE=`date +%s`
    FREEMEM=`free -b | grep "Mem" | awk '{ print $4 }'`
    sleep $INTERVAL
    echo "rrdtool update free_mem.rrd $DATE:$FREEMEM"
done

Points to Notice
  • The time interval is set to 4 seconds, and is implemented with the sleep command.

  • RRDtool accepts time information in a special format, so-called Unix time. It is defined as the number of seconds since midnight of January 1, 1970 (UTC). For example, 1272907114 represents 2010-05-03 17:18:34.

  • The free memory information is reported in bytes with free -b. Prefer supplying basic units (bytes) to derived units (like kilobytes).

  • The line with the echo ... command contains the future name of the database file (free_mem.rrd) and, together with the collected values, forms a command line for updating the RRDtool database.

After running free_mem.sh, you see an output similar to this:

tux > sh free_mem.sh
rrdtool update free_mem.rrd 1272974835:1182994432
rrdtool update free_mem.rrd 1272974839:1162817536
rrdtool update free_mem.rrd 1272974843:1096269824
rrdtool update free_mem.rrd 1272974847:1034219520
rrdtool update free_mem.rrd 1272974851:909438976
rrdtool update free_mem.rrd 1272974855:832454656
rrdtool update free_mem.rrd 1272974859:829120512
rrdtool update free_mem.rrd 1272974863:1180377088
rrdtool update free_mem.rrd 1272974867:1179369472
rrdtool update free_mem.rrd 1272974871:1181806592

It is convenient to redirect the command's output to a file with

sh free_mem.sh > free_mem_updates.log

to simplify its future execution.

2.11.2.2 Creating the Database

Create the initial Round Robin database for our example with the following command:

tux >  rrdtool create free_mem.rrd --start 1272974834 --step=4 \
DS:memory:GAUGE:600:U:U RRA:AVERAGE:0.5:1:24

Points to Notice
  • This command creates a file called free_mem.rrd for storing our measured values in a Round Robin type database.

  • The --start option specifies the time (in Unix time) when the first value will be added to the database. In this example, it is one less than the first time value of the free_mem.sh output (1272974835).

  • The --step specifies the time interval in seconds with which the measured data will be supplied to the database.

  • The DS:memory:GAUGE:600:U:U part introduces a new data source for the database. It is called memory, its type is gauge, the maximum time between two updates is 600 seconds, and the minimal and maximal values in the measured range are unknown (U).

  • RRA:AVERAGE:0.5:1:24 creates a Round Robin archive (RRA) whose stored data are processed with a consolidation function (CF) that calculates the average of data points. Three arguments of the consolidation function are appended to the end of the line.

If no error message is displayed, the free_mem.rrd database is created in the current directory:

tux > ls -l free_mem.rrd
-rw-r--r-- 1 tux users 776 May  5 12:50 free_mem.rrd

2.11.2.3 Updating Database Values

After the database is created, you need to fill it with the measured data. In Section 2.11.2.1, “Collecting Data”, we already prepared the file free_mem_updates.log, which consists of rrdtool update commands. These commands update the database values for us.

tux > sh free_mem_updates.log; ls -l free_mem.rrd
-rw-r--r--  1 tux users  776 May  5 13:29 free_mem.rrd

As you can see, the size of free_mem.rrd remained the same even after updating its data.

2.11.2.4 Viewing Measured Values

We have already measured the values, created the database, and stored the measured values in it. Now we can work with the database, and retrieve or view its values.

To retrieve all the values from our database, enter the following on the command line:

tux > rrdtool fetch free_mem.rrd AVERAGE --start 1272974830 \
--end 1272974871
          memory
1272974832: nan
1272974836: 1.1729059840e+09
1272974840: 1.1461806080e+09
1272974844: 1.0807572480e+09
1272974848: 1.0030243840e+09
1272974852: 8.9019289600e+08
1272974856: 8.3162112000e+08
1272974860: 9.1693465600e+08
1272974864: 1.1801251840e+09
1272974868: 1.1799787520e+09
1272974872: nan

Points to Notice
  • AVERAGE will fetch average value points from the database, because only one data source is defined (Section 2.11.2.2, “Creating the Database”) with AVERAGE processing and no other function is available.

  • The first line of the output prints the name of the data source as defined in Section 2.11.2.2, “Creating the Database”.

  • The left results column represents individual points in time, while the right one represents corresponding measured average values in scientific notation.

  • The nan in the last line stands for not a number.

Now a graph representing the values stored in the database is drawn:

tux > rrdtool graph free_mem.png \
--start 1272974830 \
--end 1272974871 \
--step=4 \
DEF:free_memory=free_mem.rrd:memory:AVERAGE \
LINE2:free_memory#FF0000 \
--vertical-label "GB" \
--title "Free System Memory in Time" \
--zoom 1.5 \
--x-grid SECOND:1:SECOND:4:SECOND:10:0:%X

Points to Notice
  • free_mem.png is the file name of the graph to be created.

  • --start and --end limit the time range within which the graph will be drawn.

  • --step specifies the time resolution (in seconds) of the graph.

  • The DEF:... part is a data definition called free_memory. Its data are read from the free_mem.rrd database and its data source called memory. The average value points are calculated, because no others were defined in Section 2.11.2.2, “Creating the Database”.

  • The LINE... part specifies properties of the line to be drawn into the graph. It is 2 pixels wide, its data come from the free_memory definition, and its color is red.

  • --vertical-label sets the label to be printed along the y axis, and --title sets the main label for the whole graph.

  • --zoom specifies the zoom factor for the graph. This value must be greater than zero.

  • --x-grid specifies how to draw grid lines and their labels into the graph. Our example places them every second, while major grid lines are placed every 4 seconds. Labels are placed every 10 seconds under the major grid lines.

Figure 2.1: Example Graph Created with RRDtool

2.11.3 For More Information

RRDtool is a very complex tool with a lot of sub-commands and command line options. Some are easy to understand, but making RRDtool produce the results you want, and fine-tuning them according to your liking, may require a lot of effort.

Apart from RRDtool's man page (man 1 rrdtool), which gives you only basic information, have a look at the RRDtool home page. There you can find detailed documentation of the rrdtool command and all its sub-commands, as well as several tutorials to help you understand the common RRDtool workflow.

If you are interested in monitoring network traffic, have a look at MRTG (Multi Router Traffic Grapher). MRTG can graph the activity of many network devices. It can use RRDtool.

3 Analyzing and Managing System Log Files

System log file analysis is one of the most important tasks when analyzing a system. In fact, looking at the system log files should be the first thing to do when maintaining or troubleshooting a system. SUSE Linux Enterprise Desktop automatically logs almost everything that happens on the system in detail. Since the move to systemd, kernel messages and messages of system services registered with systemd are logged in the systemd journal (see Chapter 16, journalctl: Query the systemd Journal). Other log files (mainly those of system applications) are written in plain text and can easily be read using an editor or pager. It is also possible to parse them using scripts, which allows you to filter their content.

3.1 System Log Files in /var/log/

System log files are always located under the /var/log directory. The following list presents an overview of all system log files from SUSE Linux Enterprise Desktop present after a default installation. Depending on your installation scope, /var/log also contains log files from other services and applications not listed here. Some files and directories described below are placeholders and are only used when the corresponding application is installed. Most log files are only visible to the user root.

apparmor/

AppArmor log files. See Part IV, “Confining Privileges with AppArmor” for details on AppArmor.

audit/

Logs from the audit framework. See Part V, “The Linux Audit Framework” for details.

ConsoleKit/

Logs of the ConsoleKit daemon (daemon for tracking what users are logged in and how they interact with the computer).

cups/

Access and error logs of the Common Unix Printing System (cups).

firewall

Firewall logs.

gdm/

Log files from the GNOME display manager.

krb5/

Log files from the Kerberos network authentication system.

lastlog

A database containing information on the last login of each user. Use the command lastlog to view. See man 8 lastlog for more information.

localmessages

Log messages of some boot scripts, for example the log of the DHCP client.

mail*

Mail server (postfix, sendmail) logs.

messages

This is the default place where all kernel and system log messages go and should be the first place (along with /var/log/warn) to look at in case of problems.

NetworkManager

NetworkManager log files.

news/

Log messages from a news server.

ntp

Logs from the Network Time Protocol daemon (ntpd).

pk_backend_zypp*

PackageKit (with libzypp back-end) log files.

puppet/

Log files from the data center automation tool puppet.

samba/

Log files from Samba, the Windows SMB/CIFS file server.

warn

Log of all system warnings and errors. This should be the first place (along with the output of the systemd journal) to look in case of problems.

wtmp

Database of all login/logout activities, and remote connections. Use the command last to view. See man 1 last for more information.

xinetd.log

Log files from the extended Internet services daemon (xinetd).

Xorg.0.log

X.Org start-up log file. Refer to this in case you have problems starting X.Org. Copies from previous X.Org starts are numbered Xorg.?.log.

YaST2/

All YaST log files.

zypp/

libzypp log files. Refer to these files for the package installation history.

zypper.log

Logs from the command line installer zypper.

3.2 Viewing and Parsing Log Files

To view log files, you can use any text editor. There is also a simple YaST module for viewing the system log available in the YaST control center under Miscellaneous › System Log.

For viewing log files in a text console, use the commands less or more. Use head and tail to view the beginning or end of a log file. To view entries appended to a log file in real-time use tail -f. For information about how to use these tools, see their man pages.

To search for strings or regular expressions in log files use grep. awk is useful for parsing and rewriting log files.
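
For example, to follow a log file in real time and only show lines containing errors (an illustrative combination of the tools mentioned above):

root # tail -f /var/log/messages | grep -i error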

3.3 Managing Log Files with logrotate

Log files under /var/log grow on a daily basis and quickly become very large. logrotate is a tool that helps you manage log files and their growth. It allows automatic rotation, removal, compression, and mailing of log files. Log files can be handled periodically (daily, weekly, or monthly) or when exceeding a particular size.

logrotate is usually run daily by systemd, and thus usually modifies log files only once a day. However, exceptions occur when a log file is rotated because of its size, when logrotate is run multiple times a day, or when the --force option is used. Use /var/lib/misc/logrotate.status to find out when a particular file was last rotated.

The main configuration file of logrotate is /etc/logrotate.conf. System packages and programs that produce log files (for example, apache2) put their own configuration files in the /etc/logrotate.d/ directory. The content of /etc/logrotate.d/ is included via /etc/logrotate.conf.

Example 3.1: Example for /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
dateext

# uncomment this if you want your log files compressed
#compress

# comment these to switch compression to use gzip or another
# compression scheme
compresscmd /usr/bin/bzip2
uncompresscmd /usr/bin/bunzip2

# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
Important
Important: Avoid Permission Conflicts

The create option pays heed to the modes and ownerships of files specified in /etc/permissions*. If you modify these settings, make sure no conflicts arise.
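
A minimal configuration for a single application in /etc/logrotate.d/ could look like the following sketch (the log file path is hypothetical):

/var/log/myapp.log {
    monthly
    rotate 6
    compress
    missingok
    notifempty
}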

3.4 Monitoring Log Files with logwatch

logwatch is a customizable, pluggable log-monitoring script. It parses system logs, extracts the important information, and presents it in a human-readable manner. To use logwatch, install the logwatch package.

logwatch can either be used at the command line to generate on-the-fly reports, or via cron to regularly create custom reports. Reports can either be printed on the screen, saved to a file, or be mailed to a specified address. The latter is especially useful when automatically generating reports via cron.

On the command line, you can tell logwatch for which service and time span to generate a report and how much detail should be included:

# Detailed report on all kernel messages from yesterday
logwatch --service kernel --detail High --range Yesterday --print

# Low detail report on all sshd events recorded (incl. archived logs)
logwatch --service sshd --detail Low --range All --archives --print

# Mail a report on all smartd messages from May 5th to May 7th to root@localhost
logwatch --service smartd --range 'between 5/5/2005 and 5/7/2005' \
--mailto root@localhost --print

The --range option has a complex syntax; see logwatch --range help for details. A list of all services that can be queried is available with the following command:

ls /usr/share/logwatch/default.conf/services/ | sed 's/\.conf//g'

logwatch can be customized in great detail. However, the default configuration should usually be sufficient. The default configuration files are located under /usr/share/logwatch/default.conf/. Never change them, because they would be overwritten with the next update. Instead, place custom configuration in /etc/logwatch/conf/ (you may use the default configuration file as a template, though). A detailed HOWTO on customizing logwatch is available at /usr/share/doc/packages/logwatch/HOWTO-Customize-LogWatch. The following configuration files exist:

logwatch.conf

The main configuration file. The default version is extensively commented. Each configuration option can be overwritten on the command line.

ignore.conf

Filter for all lines that should globally be ignored by logwatch.

services/*.conf

The service directory holds configuration files for each service you can generate a report for.

logfiles/*.conf

Specifications on which log files should be parsed for each service.

3.5 Using logger to Make System Log Entries

logger is a tool for making entries in the system log. It provides a shell command interface to the rsyslogd system log module. For example, the following line outputs its message in /var/log/messages or directly in the journal (if no logging facility is running):

logger -t Test "This message comes from $USER"

Depending on the current user and host name, the log contains a line similar to this:

Sep 28 13:09:31 venus Test: This message comes from tux
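
logger can also tag a message with an explicit facility and priority using the -p option, for example:

tux > logger -p user.err -t Test "This error message comes from $USER"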

Part III Kernel Monitoring

4 SystemTap—Filtering and Analyzing System Data

SystemTap provides a command line interface and a scripting language to examine the activities of a running Linux system, particularly the kernel, in fine detail. SystemTap scripts are written in the SystemTap scripting language, are then compiled to C-code kernel modules and inserted into the kernel…

5 Kernel Probes

Kernel probes are a set of tools to collect Linux kernel debugging and performance information. Developers and system administrators usually use them either to debug the kernel, or to find system performance bottlenecks. The reported data can then be used to tune the system for better performance.

6 Hardware-Based Performance Monitoring with Perf

Perf is an interface to access the performance monitoring unit (PMU) of a processor and to record and display software events such as page faults. It supports system-wide, per-thread, and KVM virtualization guest monitoring.

7 OProfile—System-Wide Profiler

OProfile is a profiler for dynamic program analysis. It investigates the behavior of a running program and gathers information. This information can be viewed and gives hints for further optimization.

It is not necessary to recompile or use wrapper libraries to use OProfile. Not even a kernel patch is needed. Usually, when profiling an application, a small overhead is expected, depending on the workload and sampling frequency.

4 SystemTap—Filtering and Analyzing System Data

SystemTap provides a command line interface and a scripting language to examine the activities of a running Linux system, particularly the kernel, in fine detail. SystemTap scripts are written in the SystemTap scripting language, are then compiled to C-code kernel modules and inserted into the kernel. The scripts can be designed to extract, filter and summarize data, thus allowing the diagnosis of complex performance problems or functional problems. SystemTap provides information similar to the output of tools like netstat, ps, top, and iostat. However, more filtering and analysis options can be used for the collected information.

4.1 Conceptual Overview

Each time you run a SystemTap script, a SystemTap session is started. Several passes are done on the script before it is allowed to run. Then, the script is compiled into a kernel module and loaded. If the script has been executed before and no system components have changed (for example, different compiler or kernel versions, library paths, or script contents), SystemTap does not compile the script again. Instead, it uses the *.c and *.ko data stored in the SystemTap cache (~/.systemtap).

The module is unloaded when the tap has finished running. For an example, see the test run in Section 4.2, “Installation and Setup” and the respective explanation.

4.1.1 SystemTap Scripts

SystemTap usage is based on SystemTap scripts (*.stp). They tell SystemTap which type of information to collect, and what to do once that information is collected. The scripts are written in the SystemTap scripting language that is similar to AWK and C. For the language definition, see http://sourceware.org/systemtap/langref/. A lot of useful example scripts are available from http://www.sourceware.org/systemtap/examples/.

The essential idea behind a SystemTap script is to name events, and to give them handlers. When SystemTap runs the script, it monitors for certain events. When an event occurs, the Linux kernel runs the handler as a sub-routine, then resumes. Thus, events serve as the triggers for handlers to run. Handlers can record specified data and print it in a certain manner.

The SystemTap language only uses a few data types (integers, strings, and associative arrays of these), and full control structures (blocks, conditionals, loops, functions). It has a lightweight punctuation (semicolons are optional) and does not need detailed declarations (types are inferred and checked automatically).

For more information about SystemTap scripts and their syntax, refer to Section 4.3, “Script Syntax” and to the stapprobes and stapfuncs man pages, which are available with the systemtap-docs package.

4.1.2 Tapsets

Tapsets are a library of pre-written probes and functions that can be used in SystemTap scripts. When a user runs a SystemTap script, SystemTap checks the script's probe events and handlers against the tapset library. SystemTap then loads the corresponding probes and functions before translating the script to C. Like SystemTap scripts themselves, tapsets use the file name extension *.stp.

However, unlike SystemTap scripts, tapsets are not meant for direct execution. They constitute the library from which other scripts can pull definitions. Thus, the tapset library is an abstraction layer designed to make it easier for users to define events and functions. Tapsets provide aliases for functions that users could want to specify as an event. Knowing the proper alias is often easier than remembering specific kernel functions that might vary between kernel versions.

4.1.3 Commands and Privileges

The main commands associated with SystemTap are stap and staprun. To execute them, you either need root privileges or must be a member of the stapdev or stapusr group.

stap

SystemTap front-end. Runs a SystemTap script (either from file, or from standard input). It translates the script into C code, compiles it, and loads the resulting kernel module into a running Linux kernel. Then, the requested system trace or probe functions are performed.

staprun

SystemTap back-end. Loads and unloads kernel modules produced by the SystemTap front-end.

For a list of options for each command, use --help. For details, refer to the stap and the staprun man pages.

To avoid giving users root access solely to enable them to work with SystemTap, use one of the following SystemTap groups. They are not available by default on SUSE Linux Enterprise Desktop, but you can create the groups and adjust the access rights accordingly. Also adjust the permissions of the staprun command if the security implications are acceptable for your environment.

stapdev

Members of this group can run SystemTap scripts with stap, or run SystemTap instrumentation modules with staprun. As running stap involves compiling scripts into kernel modules and loading them into the kernel, members of this group still have effective root access.

stapusr

Members of this group are only allowed to run SystemTap instrumentation modules with staprun. In addition, they can only run those modules from /lib/modules/KERNEL_VERSION/systemtap/. This directory must be owned by root and must only be writable for the root user.
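
The following is a minimal sketch of creating both groups and granting module-execution rights to a user; the user name tux is an assumption. Run as root:

groupadd stapdev
groupadd stapusr
usermod -a -G stapusr tux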

4.1.4 Important Files and Directories

The following list gives an overview of the SystemTap main files and directories.

/lib/modules/KERNEL_VERSION/systemtap/

Holds the SystemTap instrumentation modules.

/usr/share/systemtap/tapset/

Holds the standard library of tapsets.

/usr/share/doc/packages/systemtap/examples

Holds several example SystemTap scripts for various purposes. Only available if the systemtap-docs package is installed.

~/.systemtap/cache

Data directory for cached SystemTap files.

/tmp/stap*

Temporary directory for SystemTap files, including translated C code and kernel object.

4.2 Installation and Setup

As SystemTap needs information about the kernel, some additional kernel-related packages must be installed. For each kernel you want to probe with SystemTap, you need to install a set of the following packages. This set should exactly match the kernel version and flavor (indicated by * in the overview below).

Important
Important: Repository for Packages with Debugging Information

If you subscribed your system for online updates, you can find debuginfo packages in the *-Debuginfo-Updates online installation repository relevant for SUSE Linux Enterprise Desktop 12 SP3. Use YaST to enable the repository.

For the classic SystemTap setup, install the following packages (using either YaST or zypper).

  • systemtap

  • systemtap-server

  • systemtap-docs (optional)

  • kernel-*-base

  • kernel-*-debuginfo

  • kernel-*-devel

  • kernel-source-*

  • gcc

To get access to the man pages and to a helpful collection of example SystemTap scripts for various purposes, additionally install the systemtap-docs package.

To check if all packages are correctly installed on the machine and if SystemTap is ready to use, execute the following command as root.

stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'

This command probes the currently used kernel by running a script and returning output. If the output is similar to the following, SystemTap is successfully deployed and ready to use:

Pass 1: parsed user script and 59 library script(s) in 80usr/0sys/214real ms.
Pass 2: analyzed script: 1 probe(s), 11 function(s), 2 embed(s), 1 global(s) in
 140usr/20sys/412real ms.
Pass 3: translated to C into
 "/tmp/stapDwEk76/stap_1856e21ea1c246da85ad8c66b4338349_4970.c" in 160usr/0sys/408real ms.
Pass 4: compiled C into "stap_1856e21ea1c246da85ad8c66b4338349_4970.ko" in
 2030usr/360sys/10182real ms.
Pass 5: starting run.
 read performed
Pass 5: run completed in 10usr/20sys/257real ms.

1

Checks the script against the existing tapset library in /usr/share/systemtap/tapset/ for any tapsets used. Tapsets are scripts that form a library of pre-written probes and functions that can be used in SystemTap scripts.

2

Examines the script for its components.

3

Translates the script to C. Runs the system C compiler to create a kernel module from it. Both the resulting C code (*.c) and the kernel module (*.ko) are stored in the SystemTap cache, ~/.systemtap.

4

Loads the module and enables all the probes (events and handlers) in the script by hooking into the kernel. The event being probed is a Virtual File System (VFS) read. As the event occurs on any processor, a valid handler is executed (prints the text read performed) and closed with no errors.

5

After the SystemTap session is terminated, the probes are disabled, and the kernel module is unloaded.

In case any error messages appear during the test, check the output for hints about any missing packages and make sure they are installed correctly. Rebooting and loading the appropriate kernel may also be needed.

4.3 Script Syntax

SystemTap scripts consist of the following two components:

SystemTap Events (Probe Points)

Name the kernel events at which the associated handler should be executed. Examples of events are entering or exiting a certain function, a timer expiring, or starting or terminating a session.

SystemTap Handlers (Probe Body)

Series of script language statements that specify the work to be done whenever a certain event occurs. This normally includes extracting data from the event context, storing it in internal variables, or printing results.

An event and its corresponding handler are collectively called a probe. SystemTap events are also called probe points. A probe's handler is also called a probe body.

Comments can be inserted anywhere in the SystemTap script in various styles: using either #, /* */, or // as marker.

4.3.1 Probe Format

A SystemTap script can have multiple probes. They must be written in the following format:

probe EVENT {STATEMENTS}

Each probe has a corresponding statement block. This statement block must be enclosed in { } and contains the statements to be executed per event.

Example 4.1: Simple SystemTap Script

The following example shows a simple SystemTap script.

probe begin
{
   printf ("hello world\n")
   exit ()
}

probe

Start of the probe.

begin

Event begin (the start of the SystemTap session).

{

Start of the handler definition.

printf

First function defined in the handler: the printf function.

"hello world\n"

String to be printed by the printf function, followed by a line break (\n).

exit ()

Second function defined in the handler: the exit() function. Note that the SystemTap script will continue to run until the exit() function executes. To stop the execution of the script earlier, stop it manually by pressing Ctrl+C.

}

End of the handler definition.

The event begin (the start of the SystemTap session) triggers the handler enclosed in { }. Here, that is the printf function: it prints hello world followed by a new line. Then, the script exits.

If your statement block holds several statements, SystemTap executes these statements in sequence—you do not need to insert special separators or terminators between multiple statements. A statement block can also be nested within another statement block. Generally, statement blocks in SystemTap scripts use the same syntax and semantics as in the C programming language.

4.3.2 SystemTap Events (Probe Points)

SystemTap supports several built-in events.

The general event syntax is a dotted-symbol sequence. This allows a breakdown of the event namespace into parts. Each component identifier may be parametrized by a string or number literal, with a syntax like a function call. A component may include a * character, to expand to other matching probe points. A probe point may be followed by a ? character, to indicate that it is optional, and that no error should result if it fails to expand. Alternatively, a probe point may be followed by a ! character to indicate that it is both optional and sufficient.

SystemTap supports multiple events per probe—they need to be separated by a comma (,). If multiple events are specified in a single probe, SystemTap will execute the handler when any of the specified events occur.
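
As an illustration, the following one-liner attaches one handler to two syscall events. This is a sketch assuming the syscall tapset is available for your kernel; the tapset variable name reports which of the two events fired. Stop it with Ctrl+C:

stap -e 'probe syscall.open, syscall.close { printf("%s by %s\n", name, execname()) }'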

In general, events can be classified into the following categories:

  • Synchronous events: Occur when any process executes an instruction at a particular location in kernel code. This gives other events a reference point (instruction address) from which more contextual data may be available.

    An example for a synchronous event is vfs.FILE_OPERATION: The entry to the FILE_OPERATION event for Virtual File System (VFS). For example, in Section 4.2, “Installation and Setup”, read is the FILE_OPERATION event used for VFS.

  • Asynchronous events: Not tied to a particular instruction or location in code. This family of probe points consists mainly of counters, timers, and similar constructs.

    Examples of asynchronous events are: begin (the start of a SystemTap session, that is, when a SystemTap script is run), end (the end of a SystemTap session), or timer events. Timer events specify a handler to be executed periodically, for example timer.s(SECONDS) or timer.ms(MILLISECONDS).

    When used together with other probes that collect information, timer events allow you to print periodic updates and see how that information changes over time.

Example 4.2: Probe with Timer Event

For example, the following probe would print the text hello world every 4 seconds:

probe timer.s(4)
{
   printf("hello world\n")
}

For detailed information about supported events, refer to the stapprobes man page. The See Also section of the man page also contains links to other man pages that discuss supported events for specific subsystems and components.

4.3.3 SystemTap Handlers (Probe Body)

Each SystemTap event is accompanied by a corresponding handler defined for that event, consisting of a statement block.

4.3.3.1 Functions

If you need the same set of statements in multiple probes, you can place them in a function for easy reuse. Functions are defined by the keyword function followed by a name. They take any number of string or numeric arguments (by value) and may return a single string or number.

function FUNCTION_NAME(ARGUMENTS) {STATEMENTS}
probe EVENT {FUNCTION_NAME(ARGUMENTS)}

The statements in FUNCTION_NAME are executed when the probe for EVENT executes. The ARGUMENTS are optional values passed into the function.

Functions can be defined anywhere in the script.
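
As an illustration, the following sketch defines a hypothetical helper who() and reuses it from a timer probe; the helper name and the timer intervals are assumptions:

stap -e 'function who() { printf("%s(%d)\n", execname(), pid()) }
probe timer.s(1) { who() }
probe timer.s(5) { exit() }'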

One of the functions needed very often was already introduced in Example 4.1, “Simple SystemTap Script”: the printf function for printing data in a formatted way. When using the printf function, you can specify how arguments should be printed by using a format string. The format string is included in quotation marks and can contain further format specifiers, introduced by a % character.

Which format string to use depends on your list of arguments. Format strings can have multiple format specifiers—each matching a corresponding argument. Multiple arguments are separated by commas.

Example 4.3: printf Function with Format Specifiers
printf ("1%s2(%d3) open\n4", execname(), pid())

1

Start of the format string, indicated by ".

2

String format specifier.

3

Integer format specifier.

4

End of the format string, indicated by ".

The example above prints the current executable name (execname()) as a string and the process ID (pid()) as an integer in brackets. Then, a space, the word open and a line break follow:

[...]
vmware-guestd(2206) open
hald(2360) open
[...]

Apart from the two functions execname() and pid() used in Example 4.3, “printf Function with Format Specifiers”, a variety of other functions can be used as printf arguments.

Among the most commonly used SystemTap functions are the following:

tid()

ID of the current thread.

pid()

Process ID of the current thread.

uid()

ID of the current user.

cpu()

Current CPU number.

execname()

Name of the current process.

gettimeofday_s()

Number of seconds since Unix epoch (January 1, 1970).

ctime()

Convert time into a string.

pp()

String describing the probe point currently being handled.

thread_indent()

Useful function for organizing print results. It (internally) stores an indentation counter for each thread (tid()). The function takes one argument, an indentation delta, indicating how many spaces to add or remove from the thread's indentation counter. It returns a string with some generic trace data along with an appropriate number of indentation spaces. The generic data returned includes a time stamp (number of microseconds since the initial indentation for the thread), a process name, and the thread ID itself. This allows you to identify what functions were called, who called them, and how long they took.

Call entries and exits often do not immediately precede each other (otherwise it would be easy to match them). In between a first call entry and its exit, usually other call entries and exits are made. The indentation counter helps you match an entry with its corresponding exit as it indents the next function call in case it is not the exit of the previous one. For an example SystemTap script using thread_indent() and the respective output, refer to the SystemTap Tutorial: http://sourceware.org/systemtap/tutorial/Tracing.html#fig:socket-trace.

For more information about supported SystemTap functions, refer to the stapfuncs man page.
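
For example, two of the listed functions can be combined to print the current wall clock time as a string. A minimal sketch:

stap -e 'probe begin { printf("%s\n", ctime(gettimeofday_s())); exit() }'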

4.3.3.2 Other Basic Constructs

Apart from functions, you can use other common constructs in SystemTap handlers, including variables, conditional statements (like if/else), while loops, for loops, arrays, and command line arguments.

4.3.3.2.1 Variables

Variables may be defined anywhere in the script. To define one, simply choose a name and assign a value from a function or expression to it:

foo = gettimeofday_s()

Then you can use the variable in an expression. From the type of values assigned to the variable, SystemTap automatically infers the type of each identifier (string or number). Any inconsistencies will be reported as errors. In the example above, foo would automatically be classified as a number and could be printed via printf() with the integer format specifier (%d).

However, by default, variables are local to the probe they are used in: they are initialized, used, and disposed of at each handler invocation. To share variables between probes, declare them as global anywhere in the script. To do so, use the global keyword outside of the probes:

Example 4.4: Using Global Variables
global count_jiffies, count_ms
probe timer.jiffies(100) { count_jiffies ++ }
probe timer.ms(100) { count_ms ++ }
probe timer.ms(12345)
{
  hz=(1000*count_jiffies) / count_ms
  printf ("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n",
    count_jiffies, count_ms, hz)
  exit ()
}

This example script computes the CONFIG_HZ setting of the kernel by using timers that count jiffies and milliseconds, then computing accordingly. (A jiffy is the duration of one tick of the system timer interrupt. It is not an absolute time interval unit, since its duration depends on the clock interrupt frequency of the particular hardware platform). With the global statement it is possible to use the variables count_jiffies and count_ms also in the probe timer.ms(12345). With ++ the value of a variable is incremented by 1.

4.3.3.2.2 Conditional Statements

There are several conditional statements that you can use in SystemTap scripts. The following are probably the most common:

If/Else Statements

They are expressed in the following format:

if (CONDITION) STATEMENT1
else STATEMENT2

The if statement compares an integer-valued expression to zero. If the condition expression (CONDITION) is non-zero, the first statement (STATEMENT1) is executed. If it is zero, the second statement (STATEMENT2) is executed. The else clause with STATEMENT2 is optional. Both STATEMENT1 and STATEMENT2 can also be statement blocks.

While Loops

They are expressed in the following format:

while (CONDITION) STATEMENT

As long as CONDITION is non-zero, STATEMENT is executed. STATEMENT can also be a statement block. It must change a value so that CONDITION eventually becomes zero.

For Loops

They are a shortcut for while loops and are expressed in the following format:

for (INITIALIZATION; CONDITIONAL; INCREMENT) STATEMENT

The INITIALIZATION expression initializes a counter for the number of loop iterations and is executed before execution of the loop starts. The loop continues until the CONDITIONAL expression is false (this expression is checked at the beginning of each iteration). The INCREMENT expression increments the loop counter; it is executed at the end of each loop iteration.

Conditional Operators

The following operators can be used in conditional statements:

==:  Is equal to

!=:  Is not equal to

>=:  Is greater than or equal to

<=:  Is less than or equal to

4.4 Example Script

If you have installed the systemtap-docs package, you can find several useful SystemTap example scripts in /usr/share/doc/packages/systemtap/examples.

This section describes a rather simple example script in more detail: /usr/share/doc/packages/systemtap/examples/network/tcp_connections.stp.

Example 4.5: Monitoring Incoming TCP Connections with tcp_connections.stp
#! /usr/bin/env stap

probe begin {
  printf("%6s %16s %6s %6s %16s\n",
         "UID", "CMD", "PID", "PORT", "IP_SOURCE")
}

probe kernel.function("tcp_accept").return?,
      kernel.function("inet_csk_accept").return? {
  sock = $return
  if (sock != 0)
    printf("%6d %16s %6d %6d %16s\n", uid(), execname(), pid(),
           inet_get_local_port(sock), inet_get_ip_source(sock))
}

This SystemTap script monitors the incoming TCP connections and helps to identify unauthorized or unwanted network access requests in real time. It shows the following information for each new incoming TCP connection accepted by the computer:

  • User ID (UID)

  • Command accepting the connection (CMD)

  • Process ID of the command (PID)

  • Port used by the connection (PORT)

  • IP address from which the TCP connection originated (IP_SOURCE)

To run the script, execute

stap /usr/share/doc/packages/systemtap/examples/network/tcp_connections.stp

and follow the output on the screen. To manually stop the script, press Ctrl+C.

4.5 User Space Probing

For debugging user space applications (like DTrace can do), SUSE Linux Enterprise Desktop 12 SP3 supports user space probing with SystemTap: Custom probe points can be inserted in any user space application. Thus, SystemTap lets you use both kernel space and user space probes to debug the behavior of the whole system.

To get the required utrace infrastructure and the uprobes kernel module for user space probing, you need to install the kernel-trace package in addition to the packages listed in Section 4.2, “Installation and Setup”.

utrace implements a framework for controlling user space tasks. It provides an interface that can be used by various tracing engines, implemented as loadable kernel modules. The engines register callback functions for specific events, then attach to whichever thread they want to trace. As the callbacks are made from safe places in the kernel, this allows for great leeway in the kinds of processing the functions can do. Various events can be watched via utrace, for example, system call entry and exit, fork(), signals being sent to the task, etc. More details about the utrace infrastructure are available at http://sourceware.org/systemtap/wiki/utrace.

SystemTap includes support for probing the entry into and return from a function in user space processes, probing predefined markers in user space code, and monitoring user-process events.
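
As a minimal sketch of a user space probe (assuming the utrace/uprobes support described above is in place), the following one-liner reports each start of /bin/ls. Run ls in another shell to trigger it, and stop the session with Ctrl+C:

stap -e 'probe process("/bin/ls").begin { printf("ls started, pid %d\n", pid()) }'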

To check if the currently running kernel provides the needed utrace support, use the following command:

 grep CONFIG_UTRACE /boot/config-`uname -r`

For more details about user space probing, refer to https://sourceware.org/systemtap/SystemTap_Beginners_Guide/userspace-probing.html.

4.6 For More Information

This chapter only provides a short SystemTap overview. Refer to the following links for more information about SystemTap:

http://sourceware.org/systemtap/

SystemTap project home page.

http://sourceware.org/systemtap/wiki/

Huge collection of useful information about SystemTap, ranging from detailed user and developer documentation to reviews and comparisons with other tools, or Frequently Asked Questions and tips. Also contains collections of SystemTap scripts, examples and usage stories and lists recent talks and papers about SystemTap.

http://sourceware.org/systemtap/documentation.html

Features a SystemTap Tutorial, a SystemTap Beginner's Guide, a Tapset Developer's Guide, and a SystemTap Language Reference in PDF and HTML format. Also lists the relevant man pages.

You can also find the SystemTap language reference and SystemTap tutorial in your installed system under /usr/share/doc/packages/systemtap. Example SystemTap scripts are available from the example subdirectory.

5 Kernel Probes


Kernel probes are a set of tools to collect Linux kernel debugging and performance information. Developers and system administrators usually use them either to debug the kernel, or to find system performance bottlenecks. The reported data can then be used to tune the system for better performance.

You can insert these probes into any kernel routine, and specify a handler to be invoked after a particular break-point is hit. The main advantage of kernel probes is that you no longer need to rebuild the kernel and reboot the system after you make changes in a probe.

To use kernel probes, you typically need to write or obtain a specific kernel module. Such modules include both the init and the exit function. The init function (such as register_kprobe()) registers one or more probes, while the exit function unregisters them. The registration function defines where the probe will be inserted and which handler will be called after the probe is hit. To register or unregister a group of probes at one time, you can use relevant register_<PROBE_TYPE>probes() or unregister_<PROBE_TYPE>probes() functions.

Debugging and status messages are typically reported with the printk kernel routine. printk is a kernel space equivalent of a user space printf routine. For more information on printk, see Logging kernel messages. Normally, you can view these messages by inspecting the output of the systemd journal (see Chapter 16, journalctl: Query the systemd Journal). For more information on log files, see Chapter 3, Analyzing and Managing System Log Files.

5.1 Supported Architectures

Kernel probes are fully implemented on the following architectures:

  • x86

  • AMD64/Intel 64

  • ARM

  • POWER

Kernel probes are partially implemented on the following architectures:

  • IA64 (does not support probes on instruction slot 1)

  • sparc64 (return probes not yet implemented)

5.2 Types of Kernel Probes

There are three types of kernel probes: Kprobes, Jprobes, and Kretprobes. Kretprobes are sometimes called return probes. You can find source code examples of all three types of probes in the Linux kernel. See the directory /usr/src/linux/samples/kprobes/ (package kernel-source).

5.2.1 Kprobes

Kprobes can be attached to any instruction in the Linux kernel. When a Kprobe is registered, it inserts a break-point at the first byte of the probed instruction. When the processor hits this break-point, the processor registers are saved, and processing passes to Kprobes. First, a pre-handler is executed, then the probed instruction is single-stepped, and finally a post-handler is executed. Control is then passed to the instruction following the probe point.

5.2.2 Jprobes

Jprobes is implemented through the Kprobes mechanism. It is inserted on a function's entry point and allows direct access to the arguments of the function which is being probed. Its handler routine must have the same argument list and return value as the probed function. To end it, call the jprobe_return() function.

When a jprobe is hit, the processor registers are saved, and the instruction pointer is directed to the jprobe handler routine. The control then passes to the handler with the same register contents as the function being probed. Finally, the handler calls the jprobe_return() function, and control switches back to the probed function.

In general, you can insert multiple probes on one function. Jprobe is, however, limited to only one instance per function.

5.2.3 Return Probe

Return probes are also implemented through Kprobes. When the register_kretprobe() function is called, a kprobe is attached to the entry of the probed function. After hitting the probe, the kernel probes mechanism saves the probed function return address and calls a user-defined return handler. The control is then passed back to the probed function.

Before you call register_kretprobe(), you need to set a maxactive argument, which specifies how many instances of the function can be probed at the same time. If set too low, you will miss a certain number of probes.

5.3 Kprobes API

The programming interface of Kprobes consists of functions which are used to register and unregister all used kernel probes, and associated probe handlers. For a more detailed description of these functions and their arguments, see the information sources in Section 5.5, “For More Information”.

register_kprobe()

Inserts a break-point on a specified address. When the break-point is hit, the pre_handler and post_handler are called.

register_jprobe()

Inserts a break-point at the specified address. The address needs to be the address of the first instruction of the probed function. When the break-point is hit, the specified handler is run. The handler should have the same argument list and return type as the probed function.

register_kretprobe()

Inserts a return probe for the specified function. When the probed function returns, a specified handler is run. This function returns 0 on success, or a negative error number on failure.

unregister_kprobe(), unregister_jprobe(), unregister_kretprobe()

Removes the specified probe. You can use it any time after the probe has been registered.

register_kprobes(), register_jprobes(), register_kretprobes()

Inserts each of the probes in the specified array.

unregister_kprobes(), unregister_jprobes(), unregister_kretprobes()

Removes each of the probes in the specified array.

disable_kprobe(), disable_jprobe(), disable_kretprobe()

Disables the specified probe temporarily.

enable_kprobe(), enable_jprobe(), enable_kretprobe()

Temporarily enables disabled probes.

5.4 debugfs Interface

In recent Linux kernels, the Kprobes instrumentation uses the kernel's debugfs interface. It can list all registered probes and globally switch all probes on or off.

5.4.1 Listing Registered Kernel Probes

The list of all currently registered probes is in the /sys/kernel/debug/kprobes/list file.

saturn.example.com:~ # cat /sys/kernel/debug/kprobes/list
c015d71a  k  vfs_read+0x0   [DISABLED]
c011a316  j  do_fork+0x0
c03dedc5  r  tcp_v4_rcv+0x0

The first column lists the address in the kernel where the probe is inserted. The second column prints the type of the probe: k for kprobe, j for jprobe, and r for return probe. The third column specifies the symbol, offset and optional module name of the probe. The following optional columns include the status information of the probe. If the probe is inserted on a virtual address which is not valid anymore, it is marked with [GONE]. If the probe is temporarily disabled, it is marked with [DISABLED].

5.4.2 How to Switch All Kernel Probes On or Off

The /sys/kernel/debug/kprobes/enabled file represents a switch with which you can globally and forcibly turn on or off all the registered kernel probes. To turn them off, simply enter

echo "0" > /sys/kernel/debug/kprobes/enabled

on the command line as root. To turn them on again, enter

echo "1" > /sys/kernel/debug/kprobes/enabled

Note that this way you do not change the status of the probes. If a probe is temporarily disabled, it will not be enabled automatically but will remain in the [DISABLED] state after entering the latter command.

5.5 For More Information

To learn more about kernel probes, look at the following sources of information:

  • Thorough but more technically oriented information about kernel probes is in /usr/src/linux/Documentation/kprobes.txt (package kernel-source).

  • Examples of all three types of probes (together with the related Makefile) are in the /usr/src/linux/samples/kprobes/ directory (package kernel-source).

  • In-depth information about Linux kernel modules and the printk kernel routine can be found in The Linux Kernel Module Programming Guide

  • Practical but slightly outdated information about the use of kernel probes can be found in Kernel debugging with Kprobes

6 Hardware-Based Performance Monitoring with Perf

Abstract

Perf is an interface to access the performance monitoring unit (PMU) of a processor and to record and display software events such as page faults. It supports system-wide, per-thread, and KVM virtualization guest monitoring.

You can store resulting information in a report. This report contains information about, for example, instruction pointers or what code a thread was executing.

Perf consists of two parts:

  • Code integrated into the Linux kernel that is responsible for instructing the hardware.

  • The perf user space utility that allows you to use the kernel code and helps you analyze gathered data.

6.1 Hardware-Based Monitoring

Performance monitoring means collecting information related to how an application or system performs. This information can be obtained either through software-based means or from the CPU or chipset. Perf integrates both of these methods.

Many modern processors contain a performance monitoring unit (PMU). The design and functionality of a PMU is CPU-specific. For example, the number of registers, counters and features supported will vary by CPU implementation.

Each PMU model consists of a set of registers: the performance monitor configuration (PMC) and the performance monitor data (PMD). Both can be read, but only PMCs are writable. These registers store configuration information and data.

6.2 Sampling and Counting

Perf supports several profiling modes:

  • Counting.  Count the number of occurrences of an event.

  • Event-Based Sampling.  A less exact way of counting: A sample is recorded whenever a certain threshold number of events has occurred.

  • Time-Based Sampling.  A less exact way of counting: A sample is recorded in a defined frequency.

  • Instruction-Based Sampling (AMD64 only).  The processor follows instructions appearing in a given interval and samples which events they produce. This allows following up on individual instructions and seeing which of them is critical to performance.

6.3 Installing Perf

The Perf kernel code is already included with the default kernel. To be able to use the user space utility, install the package perf.

6.4 Perf Subcommands

To gather the required information, the perf tool has several subcommands. This section gives an overview of the most often used commands.

To see help in the form of a man page for any of the subcommands, use either perf help SUBCOMMAND or man perf-SUBCOMMAND.

perf stat

Start a program and create a statistical overview that is displayed after the program quits. perf stat is used to count events.

perf record

Start a program and create a report with performance counter information. The report is stored as perf.data in the current directory. perf record is used to sample events.

perf report

Display a report that was previously created with perf record.

perf annotate

Display a report file and an annotated version of the executed code. If debug symbols are installed, you will also see the source code displayed.

perf list

List event types that Perf can report with the current kernel and with your CPU. You can filter event types by category—for example, to see hardware events only, use perf list hw.

The man page for perf_event_open has short descriptions for the most important events. For example, to find a description of the event branch-misses, search for BRANCH_MISSES (note the spelling differences):

tux > man perf_event_open | grep -A5 BRANCH_MISSES

Sometimes, events may be ambiguous. Note that the lowercase hardware event names are not the name of raw hardware events but instead the name of aliases created by Perf. These aliases map to differently named but similarly defined hardware events on each supported processor.

For example, the cpu-cycles event is mapped to the hardware event UNHALTED_CORE_CYCLES on Intel processors. On AMD processors, however, it is mapped to the hardware event CPU_CLK_UNHALTED.

Perf also allows measuring raw events specific to your hardware. To look up their descriptions, see the Architecture Software Developer's Manual of your CPU vendor. The relevant documents for AMD64/Intel 64 processors are linked to in Section 6.7, “For More Information”.

perf top

Display system activity as it happens.

perf trace

This command behaves similarly to strace. With this subcommand, you can see which system calls are executed by a particular thread or process and which signals it receives.

6.5 Counting Particular Types of Event

To count the number of occurrences of an event, such as those displayed by perf list, use:

root # perf stat -e EVENT -a

To count multiple types of events at once, list them separated by commas. For example, to count cpu-cycles and instructions, use:

root # perf stat -e cpu-cycles,instructions -a

To stop the session, press Ctrl+C.

You can also count the number of occurrences of an event within a particular time:

root # perf stat -e EVENT -a -- sleep TIME

Replace TIME by a value in seconds.
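
For example, to count CPU cycles and page faults system-wide for ten seconds (the event selection is an arbitrary illustration; use names shown by perf list):

root # perf stat -e cpu-cycles,page-faults -a -- sleep 10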

6.6 Recording Events Specific to Particular Commands

There are various ways to sample events specific to a particular command:

  • To create a report for a newly invoked command, use:

    root # perf record COMMAND

    Then, use the started process normally. When you quit the process, the Perf session will also stop.

  • To create a report for the entire system while a newly invoked command is running, use:

    root # perf record -a COMMAND

    Then, use the started process normally. When you quit the process, the Perf session will also stop.

  • To create a report for an already running process, use:

    root # perf record -p PID

    Replace PID with a process ID. To stop the session, press Ctrl+C.

Now you can view the gathered data (perf.data) using:

tux > perf report

This will open a pseudo-graphical interface. To receive help, press H. To quit, press Q.

If you prefer a graphical interface, try the GTK+ interface of Perf:

tux > perf report --gtk

However, note that the GTK+ interface is very limited in functionality.

6.7 For More Information

This chapter only provides a short overview. Refer to the following links for more information:

https://perf.wiki.kernel.org/index.php/Main_Page

The project home page. It also features a tutorial on using perf.

http://www.brendangregg.com/perf.html

Unofficial page with many one-line examples of how to use perf.

http://web.eece.maine.edu/~vweaver/projects/perf_events/

Unofficial page with several resources, mostly relating to the Linux kernel code of Perf and its API. This page includes, for example, a CPU compatibility table and a programming guide.

https://www-ssl.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3b-part-2-manual.pdf

The Intel Architectures Software Developer's Manual, Volume 3B.

https://support.amd.com/TechDocs/24593.pdf

The AMD Architecture Programmer's Manual, Volume 2.

Chapter 7, OProfile—System-Wide Profiler

Consult this chapter for other performance optimizations.

7 OProfile—System-Wide Profiler

Abstract

OProfile is a profiler for dynamic program analysis. It investigates the behavior of a running program and gathers information. This information can be viewed and gives hints for further optimization.

It is not necessary to recompile or use wrapper libraries to use OProfile. Not even a kernel patch is needed. Usually, when profiling an application, a small overhead is expected, depending on the workload and sampling frequency.

7.1 Conceptual Overview

OProfile consists of a kernel driver and a daemon for collecting data. It uses the hardware performance counters provided on many processors. OProfile is capable of profiling all code including the kernel, kernel modules, kernel interrupt handlers, system shared libraries, and other applications.

Modern processors support profiling through the hardware by performance counters. Depending on the processor, there can be many counters and each of these can be programmed with an event to count. Each counter has a value which determines how often a sample is taken. The lower the value, the more often samples are taken.

During the post-processing step, all information is collected and instruction addresses are mapped to a function name.

7.2 Installation and Requirements

To use OProfile, install the oprofile package that is included with the SLE SDK. OProfile works on AMD64/Intel 64, z Systems, and POWER processors. To find out how to install software from the SDK, refer to Section 11.4, “SUSE Software Development Kit (SDK) 12 SP3”.

It is useful to install the *-debuginfo package for the respective application you want to profile. If you want to profile the kernel, you need the debuginfo package as well.

7.3 Available OProfile Utilities

OProfile contains several utilities to handle the profiling process and its profiled data. The following list is a short summary of programs used in this chapter:

opannotate

Outputs annotated source or assembly listings mixed with profile information. An annotated report can be used in combination with addr2line to identify the source file and line where hotspots potentially exist. See man addr2line for more information.

opcontrol

Controls the profiling sessions (start or stop), dumps profile data, and sets up parameters.

ophelp

Lists available events with short descriptions.

opimport

Converts sample database files from a foreign binary format to the native format.

opreport

Generates reports from profiled data.

7.4 Using OProfile

With OProfile, you can profile both the kernel and applications. When profiling the kernel, tell OProfile where to find the vmlinux* file. Use the --vmlinux option and point it to vmlinux* (usually in /boot). If you need to profile kernel modules, OProfile does this by default. However, make sure you read http://oprofile.sourceforge.net/doc/kernel-profiling.html.

Applications usually do not need to profile the kernel, therefore you should use the --no-vmlinux option to reduce the amount of information.

7.4.1 Creating a Report

The following procedure shows how to start the daemon, collect data, stop the daemon, and create a report.

  1. Open a shell and log in as root.

  2. Decide if you want to profile with or without the Linux kernel:

    1. Profile With the Linux Kernel.  Execute the following commands, because opcontrol can only work with uncompressed images:

      cp /boot/vmlinux-`uname -r`.gz /tmp
      gunzip /tmp/vmlinux*.gz
      opcontrol --vmlinux=/tmp/vmlinux*
    2. Profile Without the Linux Kernel.  Use the following command:

      opcontrol --no-vmlinux

      To see which functions call other functions in the output, additionally use the --callgraph option and set a maximum DEPTH:

      opcontrol --no-vmlinux --callgraph DEPTH
  3. Start the OProfile daemon:

    opcontrol --start
    Using 2.6+ OProfile kernel interface.
    Using log file /var/lib/oprofile/samples/oprofiled.log
    Daemon started.
    Profiler running.
  4. Now start the application you want to profile.

  5. Stop the OProfile daemon:

    opcontrol --stop
  6. Dump the collected data to /var/lib/oprofile/samples:

    opcontrol --dump
  7. Create a report:

    opreport
    Overflow stats not available
    CPU: CPU with timer interrupt, speed 0 MHz (estimated)
    Profiling through timer interrupt
              TIMER:0|
      samples|      %|
    ------------------
        84877 98.3226 no-vmlinux
    ...
  8. Shut down the oprofile daemon:

    opcontrol --shutdown

7.4.2 Getting Event Configurations

The general procedure for event configuration is as follows:

  1. First, use the events CPU_CLK_UNHALTED and INST_RETIRED to find optimization opportunities.

  2. Use specific events to find bottlenecks. To list them, use the command opcontrol --list-events.

If you need to profile certain events, first check the available events supported by your processor with the ophelp command (example output generated from Intel Core i5 CPU):

ophelp
oprofile: available events for CPU type "Intel Architectural Perfmon"

See Intel 64 and IA-32 Architectures Software Developer's Manual
Volume 3B (Document 253669) Chapter 18 for architectural perfmon events
This is a limited set of fallback events because oprofile does not know your CPU
CPU_CLK_UNHALTED: (counter: all)
        Clock cycles when not halted (min count: 6000)
INST_RETIRED: (counter: all)
        number of instructions retired (min count: 6000)
LLC_MISSES: (counter: all)
        Last level cache demand requests from this core that missed the LLC (min count: 6000)
        Unit masks (default 0x41)
        ----------
        0x41: No unit mask
LLC_REFS: (counter: all)
        Last level cache demand requests from this core (min count: 6000)
        Unit masks (default 0x4f)
        ----------
        0x4f: No unit mask
BR_MISS_PRED_RETIRED: (counter: all)
        number of mispredicted branches retired (precise) (min count: 500)

You can get the same output from opcontrol --list-events.

Specify the performance counter events with the option --event. Multiple options are possible. This option needs an event name (from ophelp) and a sample rate, for example:

opcontrol --event=CPU_CLK_UNHALTED:100000
Warning
Warning: Setting Sampling Rates with CPU_CLK_UNHALTED

Setting low sampling rates can seriously impair the system performance, while high sample rates can disrupt the system to such a high degree that the data is useless. It is recommended to tune the performance metric being monitored with and without OProfile and to experimentally determine the minimum sample rate that disrupts the performance the least.

7.5 Using OProfile's GUI

The GUI for OProfile can be started as root with oprof_start, see Figure 7.1, “GUI for OProfile”. Select your events and change the counter, if necessary. Every green line is added to the list of checked events. Hover the mouse over the line to see a help text in the status line below. Use the Configuration tab to set the buffer and CPU size, the verbose option and others. Click Start to execute OProfile.

GUI for OProfile
Figure 7.1: GUI for OProfile

7.6 Generating Reports

Before generating a report, make sure OProfile has dumped your data to the /var/lib/oprofile/samples directory using the command opcontrol --dump. A report can be generated with the commands opreport or opannotate.

Calling opreport without any options gives a complete summary. With an executable as an argument, retrieve profile data only from this executable. If you analyze applications written in C++, use the --demangle=smart option.
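
For example, a per-symbol summary for a single binary could look like the following sketch, reusing the hypothetical library /lib/libfoo.so from the opannotate example below (-l lists samples per symbol):

opreport -l --demangle=smart /lib/libfoo.so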

The opannotate command generates output with annotations from source code. Run it with the following options:

opannotate --source \
   --base-dirs=BASEDIR \
   --search-dirs= \
   --output-dir=annotated/ \
   /lib/libfoo.so

The option --base-dirs takes a comma-separated list of paths which are stripped from the paths of the debug source files. These paths are searched before the ones given with --search-dirs. The --search-dirs option also takes a comma-separated list of directories to search for source files.

Note
Note: Inaccuracies in Annotated Source

Because of compiler optimization, code can disappear and appear in a different place. Use the information in http://oprofile.sourceforge.net/doc/debug-info.html to fully understand its implications.

7.7 For More Information

This chapter only provides a short overview. Refer to the following links for more information:

http://oprofile.sourceforge.net

The project home page.

Manpages

Detailed descriptions of the options of the different tools.

/usr/share/doc/packages/oprofile/oprofile.html

Contains the OProfile manual.

http://developer.intel.com/

Architecture reference for Intel processors.

http://www-01.ibm.com/chips/techlib/techlib.nsf/productfamilies/PowerPC/

Architecture reference for PowerPC64 processors in IBM iSeries, pSeries, and Blade server systems.

Part IV Resource Management

8 General System Resource Management

Tuning the system is not only about optimizing the kernel or getting the most out of your application; it begins with setting up a lean and fast system. The way you set up your partitions and file systems can influence the server's speed. The number of active services and the way routine tasks are scheduled also affects performance.

9 Kernel Control Groups

Kernel Control Groups (abbreviated as cgroups) are a kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups. These hierarchical groups can be configured to show a specialized behavior that helps with tuning the system to make the best use of available hardware and network resources.

In the following sections, we often reference kernel documentation such as /usr/src/linux/Documentation/cgroups/. These files are part of the kernel-source package.

This chapter is an overview. To use cgroups properly and to avoid performance implications, you must study the provided references.

10 Automatic Non-Uniform Memory Access (NUMA) Balancing

There are physical limitations to hardware that are encountered when many CPUs and a lot of memory are required. In this chapter, the important limitation is that there is limited communication bandwidth between the CPUs and the memory. One architecture modification that was introduced to address this is Non-Uniform Memory Access (NUMA).

In this configuration, there are multiple nodes. Each of the nodes contains a subset of all CPUs and memory. The access speed to main memory is determined by the location of the memory relative to the CPU. The performance of a workload depends on the application threads accessing data that is local to the CPU the thread is executing on. Automatic NUMA Balancing is a new feature of SLE 12. Automatic NUMA Balancing migrates data on demand to memory nodes that are local to the CPU accessing that data. Depending on the workload, this can dramatically boost performance when using NUMA hardware.

11 Power Management

Power management aims at reducing operating costs for energy and cooling systems while at the same time keeping the performance of a system at a level that matches the current requirements. Thus, power management is always a matter of balancing the actual performance needs and power saving options for a system. Power management can be implemented and used at different levels of the system. A set of specifications for power management functions of devices and the operating system interface to them has been defined in the Advanced Configuration and Power Interface (ACPI). As power savings in server environments can primarily be achieved at the processor level, this chapter introduces some main concepts and highlights some tools for analyzing and influencing relevant parameters.

8 General System Resource Management

Abstract

Tuning the system is not only about optimizing the kernel or getting the most out of your application; it begins with setting up a lean and fast system. The way you set up your partitions and file systems can influence the server's speed. The number of active services and the way routine tasks are scheduled also affects performance.

8.1 Planning the Installation

A carefully planned installation ensures that the system is set up exactly as you need it for the given purpose. It also saves considerable time when fine tuning the system. All changes suggested in this section can be made in the Installation Settings step during the installation. See Section 3.13, “Installation Settings” for details.

8.1.1 Partitioning

Depending on the server's range of applications and the hardware layout, the partitioning scheme can influence the machine's performance (although only to a lesser extent). It is beyond the scope of this manual to suggest different partitioning schemes for particular workloads. However, the following rules will positively affect performance. They do not apply when using an external storage system.

  • Make sure there always is some free space available on the disk, since a full disk delivers inferior performance

  • Disperse simultaneous read and write access onto different disks by, for example:

    • using separate disks for the operating system, data, and log files

    • placing a mail server's spool directory on a separate disk

    • distributing the user directories of a home server between different disks

8.1.2 Installation Scope

The installation scope has no direct influence on the machine's performance, but a carefully chosen scope of packages has advantages. It is recommended to install the minimum of packages needed to run the server. A system with a minimum set of packages is easier to maintain and has fewer potential security issues. Furthermore, a tailor-made installation scope also ensures that no unnecessary services are started by default.

SUSE Linux Enterprise Desktop lets you customize the installation scope on the Installation Summary screen. By default, you can select or remove preconfigured patterns for specific tasks, but it is also possible to start the YaST Software Manager for a fine-grained package-based selection.

One or more of the following default patterns may not be needed in all cases:

GNOME Desktop Environment

Servers rarely need a full desktop environment. In case a graphical environment is needed, a more economical solution such as IceWM can be sufficient.

X Window System

When solely administrating the server and its applications via command line, consider not installing this pattern. However, keep in mind that it is needed to run GUI applications from a remote machine. If your application is managed by a GUI or if you prefer the GUI version of YaST, keep this pattern.

Print Server

This pattern is only needed if you want to print from the machine.

8.1.3 Default Target

A running X Window System consumes many resources and is rarely needed on a server. It is strongly recommended to start the system in target multi-user.target. You will still be able to remotely start graphical applications.
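
The default target can be inspected and changed from the command line with systemctl. A minimal sketch, run as root:

systemctl get-default
systemctl set-default multi-user.target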

8.2 Disabling Unnecessary Services

The default installation starts several services (the number varies with the installation scope). Since each service consumes resources, it is recommended to disable the ones not needed. Run YaST › System › Services Manager to start the services management module.

If you are using the graphical version of YaST, you can click the column headlines to sort the list of services. Use this to get an overview of which services are currently running. Use the Start/Stop button to disable the service for the running session. To permanently disable it, use the Enable/Disable button.
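
The same tasks can be performed from the command line. A minimal sketch, run as root; cups.service is an example from the list below, and disabling it is only appropriate if you do not print from the machine (on systemd versions that support it, --now also stops the unit immediately):

systemctl list-units --type=service --state=running
systemctl disable --now cups.service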

The following list shows services that are started by default after the installation of SUSE Linux Enterprise Desktop. Check which of the components you need, and disable the others:

alsasound

Loads the Advanced Linux Sound System.

auditd

A daemon for the Audit system (see Part V, “The Linux Audit Framework” for details). Disable this if you do not use Audit.

bluez-coldplug

Handles cold plugging of Bluetooth dongles.

cups

A printer daemon.

java.binfmt_misc

Enables the execution of *.class or *.jar Java programs.

nfs

Services needed to mount NFS.

smbfs

Services needed to mount SMB/CIFS file systems from a Windows* server.

splash / splash_early

Shows the splash screen on start-up.

8.3 File Systems and Disk Access

Hard disks are the slowest components in a computer system and therefore often the cause for a bottleneck. Using the file system that best suits your workload helps to improve performance. Using special mount options or prioritizing a process's I/O priority are further means to speed up the system.

8.3.1 File Systems

SUSE Linux Enterprise Desktop ships with several file systems, including BtrFS, Ext4, Ext3, Ext2, ReiserFS, and XFS. Each file system has its own advantages and disadvantages.

8.3.1.1 NFS

NFS (Version 3) tuning is covered in detail in the NFS Howto at http://nfs.sourceforge.net/nfs-howto/. The first thing to experiment with when mounting NFS shares is increasing the read and write block size to 32768 by using the mount options rsize and wsize.
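
A minimal sketch of such a mount; the server name jupiter, the export path, and the mount point are assumptions:

mount -t nfs -o rsize=32768,wsize=32768 jupiter:/export /mnt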

8.3.2 Time Stamp Update Policy

Each file and directory in a file system has three time stamps associated with it: the time when the file was last read, called the access time; the time when the file data was last modified, called the modification time; and the time when the file metadata was last modified, called the change time. Keeping the access time always up to date incurs significant performance overhead, since every read-only access triggers a write operation. Thus, by default, every file system updates the access time only if the current access time is older than a day, or if it is older than the file's modification or change time. This feature is called relative access time, and the corresponding mount option is relatime. Updates of the access time can be disabled completely with the noatime mount option; however, first verify that your applications do not rely on access times. This can be an option for file and Web servers or for network storage. If the default relative access time update policy is not suitable for your applications, use the strictatime mount option.

Some file systems (for example Ext4) also support lazy time stamp updates. When this feature is enabled using the lazytime mount option, updates of all time stamps happen in memory but they are not written to disk. That happens only in response to fsync or sync system calls, when the file information is written due to another reason such as file size update, when time stamps are older than 24 hours, or when cached file information needs to be evicted from memory.

To update mount options used for a file system, either edit /etc/fstab directly, or use the Fstab Options dialog when editing or adding a partition with the YaST Partitioner.
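
As an illustration only, an /etc/fstab line combining the options discussed above, assuming the kernel and file system support lazytime; the device and mount point are hypothetical:

# Hypothetical data partition without access time updates, with lazy time stamps
/dev/sdb1  /data  ext4  noatime,lazytime  0  2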

8.3.3 Prioritizing Disk Access with ionice

The ionice command lets you prioritize disk access for single processes. This enables you to give less I/O priority to background processes with heavy disk access that are not time-critical, such as backup jobs. ionice also lets you raise the I/O priority for a specific process to make sure this process always has immediate access to the disk. The caveat of this feature is that standard writes are cached in the page cache and are written back to persistent storage only later by an independent kernel process. Thus, the I/O priority setting generally does not apply to these writes. Also be aware that the I/O class and priority setting is obeyed only by the CFQ I/O scheduler (refer to Section 12.2, “Available I/O Elevators”). You can set the following three scheduling classes:

Idle

A process from the idle scheduling class is only granted disk access when no other process has asked for disk I/O.

Best effort

The default scheduling class used for any process that has not asked for a specific I/O priority. Priority within this class can be adjusted to a level from 0 to 7 (with 0 being the highest priority). Programs running at the same best-effort priority are served in a round-robin fashion. Some kernel versions treat priority within the best-effort class differently—for details, refer to the ionice(1) man page.

Real-time

Processes in this class are always granted disk access first. Fine-tune the priority level from 0 to 7 (with 0 being the highest priority). Use with care, since it can starve other processes.

For more details and the exact command syntax refer to the ionice(1) man page. If you need more reliable control over bandwidth available to each application, use Kernel Control Groups as described in Section 9.3, “Control Group Subsystems”.
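
A short sketch of both use cases described above; the PID and paths are placeholders:

# Run a backup job in the idle class so it only uses otherwise idle disk time
ionice -c 3 tar czf /backup/home.tar.gz /home
# Lower the best-effort priority of an already running process (PID 4321 is a placeholder)
ionice -c 2 -n 7 -p 4321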

9 Kernel Control Groups

Abstract

Kernel Control Groups (cgroups) are a kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups. These hierarchical groups can be configured to show specialized behavior that helps with tuning the system to make the best use of available hardware and network resources.

In the following sections, we often reference kernel documentation such as /usr/src/linux/Documentation/cgroups/. These files are part of the kernel-source package.

This chapter is an overview. To use cgroups properly and to avoid performance implications, you must study the provided references.

9.1 Technical Overview and Definitions

The following terms are used in this chapter:

  • cgroup is another name for Control Groups.

  • In a cgroup there is a set of tasks (processes) associated with a set of subsystems that act as parameters constituting an environment for the tasks.

  • Subsystems provide the parameters that can be assigned and define CPU sets, freezer, or, more generally, resource controllers for memory, disk I/O, network traffic, and so on.

  • cgroups are organized in a tree-structured hierarchy. There can be more than one hierarchy in the system. Use a different or alternative hierarchy to cope with specific situations.

  • Every task running in the system is in exactly one of the cgroups in the hierarchy.

9.2 Scenario

See the following resource planning scenario for a better understanding (source: /usr/src/linux/Documentation/cgroups/cgroups.txt):

Figure 9.1: Resource Planning

Web browsers such as Firefox will be part of the Web network class, while NFS daemons such as (k)nfsd will be part of the NFS network class. At the same time, Firefox will be placed in the appropriate CPU and memory classes depending on whether a professor or a student started it.

9.3 Control Group Subsystems

The following subsystems are available: cpuset, cpu, cpuacct, memory, devices, freezer, net_cls, net_prio, blkio, perf_event, and hugetlb.

Either mount each subsystem separately, for example:

mkdir /cpuset /cpu
mount -t cgroup -o cpuset      none /cpuset
mount -t cgroup -o cpu,cpuacct none /cpu

Or mount all subsystems in one go. You can use an arbitrary device name (for example, none), which will appear in /proc/mounts:

mount -t cgroup none /sys/fs/cgroup

Some additional information on available subsystems:

net_cls (Identification)

The network classifier cgroup tags network packets with an identifier that controlling tools such as the Traffic Controller (tc) or Netfilter (iptables) can act on.

For more information, see /usr/src/linux/Documentation/cgroups/net_cls.txt.

net_prio (Identification)

The Network priority cgroup helps with setting the priority of network packets.

For more information, see /usr/src/linux/Documentation/cgroups/net_prio.txt.

devices (Isolation)

A system administrator can provide a list of devices that can be accessed by processes under cgroups.

It limits access to a device or a file system on a device to only tasks that belong to the specified cgroup. For more information, see /usr/src/linux/Documentation/cgroups/devices.txt.

freezer (Control)

The freezer subsystem is useful for high-performance computing (HPC) clusters. Use it to freeze (stop) all tasks in a group, or to stop tasks when they reach a defined checkpoint. For more information, see /usr/src/linux/Documentation/cgroups/freezer-subsystem.txt.

Here are basic commands to use the freezer subsystem:

mount -t cgroup -o freezer freezer /freezer
# Create a child cgroup:
mkdir /freezer/0
# Put a task into this cgroup:
echo $task_pid > /freezer/0/tasks
# Freeze it:
echo FROZEN > /freezer/0/freezer.state
# Unfreeze (thaw) it:
echo THAWED > /freezer/0/freezer.state

perf_event (Control)

perf_event collects performance data.

cpuset (Isolation)

Use cpuset to tie processes to system subsets of CPUs and memory (memory nodes). For an example, see Section 9.4.2, “Example: Cpusets”.

cpuacct (Accounting)

The CPU accounting controller groups tasks using cgroups and accounts the CPU usage of these groups. For more information, see /usr/src/linux/Documentation/cgroups/cpuacct.txt.

memory (Resource Control)

The memory controller can be used for:

  • Tracking or limiting the memory usage of user space processes.

  • Controlling swap usage: set swapaccount=1 as a kernel boot parameter to enable swap accounting.

  • Limiting LRU (Least Recently Used) pages, that is, anonymous memory and the file cache.

  • It sets no limits for kernel memory; if needed, this may be covered by another subsystem.

Note: Protection from Memory Pressure

The memory cgroup offers a mechanism that allows workloads to opt in to easier isolation. A memory cgroup can define a so-called low limit (memory.low_limit_in_bytes), which works as a protection from memory pressure. For workloads that need to be isolated from outside memory management activity, set the value to the expected Resident Set Size (RSS) plus some headroom. If a memory pressure condition triggers on the system and the particular group is still under its low limit, its memory is protected from reclaim. As a result, workloads outside the cgroup do not need the aforementioned capping.

For more information, see /usr/src/linux/Documentation/cgroups/memory.txt.
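
A minimal sketch of setting this limit, assuming the memory controller is mounted at /sys/fs/cgroup/memory and a hypothetical group dbgroup already exists; the 4 GB value is an example only:

# dbgroup and the 4 GB value are hypothetical (expected RSS plus headroom)
echo $((4*1024*1024*1024)) > /sys/fs/cgroup/memory/dbgroup/memory.low_limit_in_bytes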

hugetlb (Resource Control)

The HugeTLB controller manages the memory allocated to huge pages.

For more information, see /usr/src/linux/Documentation/cgroups/hugetlb.txt.

cpu (Control)

Use the cpu subsystem to share CPU bandwidth between groups via the group scheduling function of CFS (the Completely Fair Scheduler). The mechanism is complex; study the referenced kernel documentation before tuning.

blkio (Resource Control)

The Block IO controller is available as a disk I/O controller. With the blkio controller you can currently set policies for proportional bandwidth and for throttling.

These are the basic commands to configure proportional weight division of bandwidth by setting weight values in blkio.weight:

# Setup in /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
# Start two cgroups
mkdir -p /sys/fs/cgroup/blkio/group1 /sys/fs/cgroup/blkio/group2
# Set weights
echo 1000 > /sys/fs/cgroup/blkio/group1/blkio.weight
echo  500 > /sys/fs/cgroup/blkio/group2/blkio.weight
# Write the PIDs of the processes to be controlled to the
# appropriate groups
COMMAND1 &
echo $! > /sys/fs/cgroup/blkio/group1/tasks

COMMAND2 &
echo $! > /sys/fs/cgroup/blkio/group2/tasks

These are the basic commands to configure throttling or upper limit policy by setting values in blkio.throttle.read_bps_device for reads and blkio.throttle.write_bps_device for writes:

# Setup in /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
# Bandwidth rate of a device for the root group; format:
# <major>:<minor>  <bytes_per_second>
echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

For more information about caveats, usage scenarios, and additional parameters, see /usr/src/linux/Documentation/cgroups/blkio-controller.txt.

9.4 Using Control Groups

9.4.1 Prerequisites

To conveniently use cgroups, install the following additional packages:

  • libcgroup-tools — basic user space tools to simplify resource management

  • libcgroup1 — control groups management library

  • cpuset — contains the cset tool for manipulating cpusets

  • libcpuset1 — C API to cpusets

  • kernel-source — only needed for documentation purposes

9.4.2 Example: Cpusets

On the command line, proceed as follows:

  1. To determine the number of CPUs and memory nodes, see /proc/cpuinfo and /proc/zoneinfo.

  2. Create the cpuset hierarchy as a virtual file system (source: /usr/src/linux/Documentation/cgroups/cpusets.txt):

    mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
    cd /sys/fs/cgroup/cpuset
    mkdir Charlie
    cd Charlie
    # List of CPUs in this cpuset:
    echo 2-3 > cpuset.cpus
    # List of memory nodes in this cpuset:
    echo 1 > cpuset.mems
    echo $$ > tasks
    # The current shell is now running in cpuset Charlie
    # The next line should display '/Charlie'
    cat /proc/self/cpuset
  3. Remove the cpuset using shell commands:

    rmdir /sys/fs/cgroup/cpuset/Charlie

    This fails as long as the cpuset is in use. First, you must remove the cpusets nested inside it, or the tasks (processes) that belong to it. Check them with:

    cat /sys/fs/cgroup/cpuset/Charlie/tasks

For background information and additional configuration flags, see /usr/src/linux/Documentation/cgroups/cpusets.txt.

With the cset tool, proceed as follows:

# Determine the number of CPUs and memory nodes
cset set --list
# Creating the cpuset hierarchy
cset set --cpu=2-3 --mem=1 --set=Charlie
# Starting processes in a cpuset
cset proc --set Charlie --exec -- stress -c 1 &
# Moving existing processes to a cpuset
cset proc --move --pid PID --toset=Charlie
# List tasks in a cpuset
cset proc --list --set Charlie
# Removing a cpuset
cset set --destroy Charlie

9.4.3 Example: cgroups

Using shell commands, proceed as follows:

  1. Create the cgroups hierarchy:

    mount -t cgroup cgroup /sys/fs/cgroup
    cd /sys/fs/cgroup
    mkdir priority
    cd priority
    cat cpu.shares
  2. Understanding cpu.shares (the values are relative weights; the percentages below assume one other group running with the default weight of 1024):

    • 1024 is the default (for more information, see /usr/src/linux/Documentation/scheduler/sched-design-CFS.txt) = 50% usage

    • 1524 = 60% usage

    • 2048 = 67% usage

    • 512 = 33% usage

  3. Changing cpu.shares

    echo 1024 > cpu.shares

9.4.4 Setting Directory and File Permissions

This is a simple example. Use the following in /etc/cgconfig.conf:

group foo {
        perm {
                task {
                        uid = root;
                        gid = users;
                        fperm = 660;
                }
                admin {
                        uid = root;
                        gid = root;
                        fperm = 600;
                        dperm = 750;
                }
        }
}

mount {
        cpu = /mnt/cgroups/cpu;
}

Then start the cgconfig service and run stat /mnt/cgroups/cpu/foo/tasks, which should show the permission mask 660 with root as the owner and users as the group. stat /mnt/cgroups/cpu/foo/ should show 750, and all files (except tasks) should have the mask 600. Note that fperm is applied on top of the existing file permissions as a mask.

For more information, see the cgconfig.conf man page.

9.5 For More Information

10 Automatic Non-Uniform Memory Access (NUMA) Balancing

Abstract

There are physical limitations to hardware that are encountered when many CPUs and a lot of memory are required. In this chapter, the important limitation is that there is limited communication bandwidth between the CPUs and the memory. One architecture modification that was introduced to address this is Non-Uniform Memory Access (NUMA).

In this configuration, there are multiple nodes. Each of the nodes contains a subset of all CPUs and memory. The access speed to main memory is determined by the location of the memory relative to the CPU. The performance of a workload depends on the application threads accessing data that is local to the CPU the thread is executing on. Automatic NUMA Balancing is a new feature of SLE 12. Automatic NUMA Balancing migrates data on demand to memory nodes that are local to the CPU accessing that data. Depending on the workload, this can dramatically boost performance when using NUMA hardware.

10.1 Implementation

Automatic NUMA balancing happens in three basic steps:

  1. A task scanner periodically scans a portion of a task's address space and marks the memory to force a page fault when the data is next accessed.

  2. The next access to the data will result in a NUMA Hinting Fault. Based on this fault, the data can be migrated to a memory node associated with the task accessing the memory.

  3. To keep a task, the CPU it is using, and the memory it is accessing together, the scheduler groups tasks that share data.

The unmapping of data and the page fault handling incur overhead. However, the overhead is commonly offset by the improved locality of threads accessing data associated with their CPU.

10.2 Configuration

Static configuration has been the recommended way of tuning workloads on NUMA hardware for some time. To do this, memory policies can be set with numactl, taskset or cpusets. NUMA-aware applications can use special APIs. In cases where the static policies have already been created, automatic NUMA balancing should be disabled as the data access should already be local.

numactl --hardware shows the memory configuration of the machine and whether it supports NUMA. The following is example output from a 4-node machine:

tux > numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28 32 36 40 44
node 0 size: 16068 MB
node 0 free: 15909 MB
node 1 cpus: 1 5 9 13 17 21 25 29 33 37 41 45
node 1 size: 16157 MB
node 1 free: 15948 MB
node 2 cpus: 2 6 10 14 18 22 26 30 34 38 42 46
node 2 size: 16157 MB
node 2 free: 15981 MB
node 3 cpus: 3 7 11 15 19 23 27 31 35 39 43 47
node 3 size: 16157 MB
node 3 free: 16028 MB
node distances:
node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10

Automatic NUMA balancing can be enabled or disabled for the current session by writing 1 or 0, respectively, to /proc/sys/kernel/numa_balancing. To permanently enable or disable it, use the kernel command line option numa_balancing=[enable|disable].
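
For example:

# Enable automatic NUMA balancing for the current session
echo 1 > /proc/sys/kernel/numa_balancing
# Verify the current setting (1 = enabled, 0 = disabled)
cat /proc/sys/kernel/numa_balancing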

If Automatic NUMA Balancing is enabled, the task scanner behavior can be configured. The task scanner balances the overhead of Automatic NUMA Balancing with the amount of time it takes to identify the best placement of data.

numa_balancing_scan_delay_ms

The amount of CPU time a thread must consume before its data is scanned. This prevents short-lived processes from creating overhead.

numa_balancing_scan_period_min_ms and numa_balancing_scan_period_max_ms

Controls how frequently a task's data is scanned. Depending on the locality of the faults, the scan rate will increase or decrease. These settings control the minimum and maximum scan rates.

numa_balancing_scan_size_mb

Controls how much address space is scanned when the task scanner is active.
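
These tunables are exposed under /proc/sys/kernel/ and can be inspected or changed at runtime. A brief sketch; the value written is an example only:

cat /proc/sys/kernel/numa_balancing_scan_delay_ms
# 256 MB is an example value, not a recommendation
echo 256 > /proc/sys/kernel/numa_balancing_scan_size_mb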

10.3 Monitoring

The most important task is to assign metrics to your workload and measure the performance with Automatic NUMA Balancing enabled and disabled to quantify the impact. Profiling tools can be used to monitor local and remote memory accesses if the CPU supports such monitoring. Automatic NUMA Balancing activity can be monitored via the following parameters in /proc/vmstat:

numa_pte_updates

The number of base pages that were marked for NUMA hinting faults.

numa_huge_pte_updates

The number of transparent huge pages that were marked for NUMA hinting faults. In combination with numa_pte_updates, the total address space that was marked can be calculated.

numa_hint_faults

Records how many NUMA hinting faults were trapped.

numa_hint_faults_local

Shows how many of the hinting faults were to local nodes. In combination with numa_hint_faults, the percentage of local versus remote faults can be calculated. A high percentage of local hinting faults indicates that the workload is closer to being converged.

numa_pages_migrated

Records how many pages were migrated because they were misplaced. As migration is a copying operation, it contributes the largest part of the overhead created by NUMA balancing.
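
To view all of these counters at once:

grep '^numa_' /proc/vmstat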

10.4 Impact

The following illustrates a simple test case of a 4-node NUMA machine running SpecJBB 2005 using a single instance of the JVM with no static tuning of memory policies. Note, however, that the impact for each workload will vary, and that this example is based on a pre-release version of SUSE Linux Enterprise Desktop 12.

            Balancing disabled      Balancing enabled
TPut 1      26629.00 (  0.00%)     26507.00 ( -0.46%)
TPut 2      55841.00 (  0.00%)     53592.00 ( -4.03%)
TPut 3      86078.00 (  0.00%)     86443.00 (  0.42%)
TPut 4     116764.00 (  0.00%)    113272.00 ( -2.99%)
TPut 5     143916.00 (  0.00%)    141581.00 ( -1.62%)
TPut 6     166854.00 (  0.00%)    166706.00 ( -0.09%)
TPut 7     195992.00 (  0.00%)    192481.00 ( -1.79%)
TPut 8     222045.00 (  0.00%)    227143.00 (  2.30%)
TPut 9     248872.00 (  0.00%)    250123.00 (  0.50%)
TPut 10    270934.00 (  0.00%)    279314.00 (  3.09%)
TPut 11    297217.00 (  0.00%)    301878.00 (  1.57%)
TPut 12    311021.00 (  0.00%)    326048.00 (  4.83%)
TPut 13    324145.00 (  0.00%)    346855.00 (  7.01%)
TPut 14    345973.00 (  0.00%)    378741.00 (  9.47%)
TPut 15    354199.00 (  0.00%)    394268.00 ( 11.31%)
TPut 16    378016.00 (  0.00%)    426782.00 ( 12.90%)
TPut 17    392553.00 (  0.00%)    437772.00 ( 11.52%)
TPut 18    396630.00 (  0.00%)    456715.00 ( 15.15%)
TPut 19    399114.00 (  0.00%)    484020.00 ( 21.27%)
TPut 20    413907.00 (  0.00%)    493618.00 ( 19.26%)
TPut 21    413173.00 (  0.00%)    510386.00 ( 23.53%)
TPut 22    420256.00 (  0.00%)    521016.00 ( 23.98%)
TPut 23    425581.00 (  0.00%)    536214.00 ( 26.00%)
TPut 24    429052.00 (  0.00%)    532469.00 ( 24.10%)
TPut 25    426127.00 (  0.00%)    526548.00 ( 23.57%)
TPut 26    422428.00 (  0.00%)    531994.00 ( 25.94%)
TPut 27    424378.00 (  0.00%)    488340.00 ( 15.07%)
TPut 28    419338.00 (  0.00%)    543016.00 ( 29.49%)
TPut 29    403347.00 (  0.00%)    529178.00 ( 31.20%)
TPut 30    408681.00 (  0.00%)    510621.00 ( 24.94%)
TPut 31    406496.00 (  0.00%)    499781.00 ( 22.95%)
TPut 32    404931.00 (  0.00%)    502313.00 ( 24.05%)
TPut 33    397353.00 (  0.00%)    522418.00 ( 31.47%)
TPut 34    382271.00 (  0.00%)    491989.00 ( 28.70%)
TPut 35    388965.00 (  0.00%)    493012.00 ( 26.75%)
TPut 36    374702.00 (  0.00%)    502677.00 ( 34.15%)
TPut 37    367578.00 (  0.00%)    500588.00 ( 36.19%)
TPut 38    367121.00 (  0.00%)    496977.00 ( 35.37%)
TPut 39    355956.00 (  0.00%)    489430.00 ( 37.50%)
TPut 40    350855.00 (  0.00%)    487802.00 ( 39.03%)
TPut 41    345001.00 (  0.00%)    468021.00 ( 35.66%)
TPut 42    336177.00 (  0.00%)    462260.00 ( 37.50%)
TPut 43    329169.00 (  0.00%)    467906.00 ( 42.15%)
TPut 44    329475.00 (  0.00%)    470784.00 ( 42.89%)
TPut 45    323845.00 (  0.00%)    450739.00 ( 39.18%)
TPut 46    323878.00 (  0.00%)    435457.00 ( 34.45%)
TPut 47    310524.00 (  0.00%)    403914.00 ( 30.07%)
TPut 48    311843.00 (  0.00%)    459017.00 ( 47.19%)

                        Balancing Disabled        Balancing Enabled
 Expctd Warehouse          48.00 (  0.00%)          48.00 (  0.00%)
 Expctd Peak Bops      310524.00 (  0.00%)      403914.00 ( 30.07%)
 Actual Warehouse          25.00 (  0.00%)          29.00 ( 16.00%)
 Actual Peak Bops      429052.00 (  0.00%)      543016.00 ( 26.56%)
 SpecJBB Bops            6364.00 (  0.00%)        9368.00 ( 47.20%)
 SpecJBB Bops/JVM        6364.00 (  0.00%)        9368.00 ( 47.20%)

Automatic NUMA Balancing simplifies tuning workloads for high performance on NUMA machines. Where possible, it is still recommended to statically tune the workload to partition it within each node. However, in all other cases, automatic NUMA balancing should boost performance.

11 Power Management

Abstract

Power management aims at reducing operating costs for energy and cooling systems while at the same time keeping the performance of a system at a level that matches the current requirements. Thus, power management is always a matter of balancing the actual performance needs and power saving options for a system. Power management can be implemented and used at different levels of the system. A set of specifications for power management functions of devices and the operating system interface to them has been defined in the Advanced Configuration and Power Interface (ACPI). As power savings in server environments can primarily be achieved at the processor level, this chapter introduces some main concepts and highlights some tools for analyzing and influencing relevant parameters.

11.1 Power Management at CPU Level

At the CPU level, you can control power usage in various ways, for example, by using idling power states (C-states), changing the CPU frequency (P-states), or throttling the CPU (T-states). The following sections give a short introduction to each approach and its significance for power savings. Detailed specifications can be found at http://www.acpi.info/spec.htm.

11.1.1 C-States (Processor Operating States)

Modern processors have several power saving modes called C-states. They reflect the capability of an idle processor to turn off unused components to save power.

When a processor is in the C0 state, it is executing instructions. A processor running in any other C-state is idle. The higher the C number, the deeper the CPU sleep mode: more components are shut down to save power. Deeper sleep states can save large amounts of energy. Their downside is that they introduce latency: it takes more time for the CPU to return to C0. Depending on the workload (threads waking up, triggering CPU usage, and then going back to sleep again for a short period of time) and the hardware (for example, interrupt activity of a network device), disabling the deepest sleep states can significantly increase overall performance. For details on how to do so, refer to Section 11.3.2, “Viewing Kernel Idle Statistics with cpupower”.

Some states also have submodes with different power saving latency levels. Which C-states and submodes are supported depends on the respective processor. However, C1 is always available.

Table 11.1, “C-States” gives an overview of the most common C-states.

Table 11.1: C-States

Mode

Definition

C0

Operational state. CPU fully turned on.

C1

First idle state. Stops CPU main internal clocks via software. Bus interface unit and APIC are kept running at full speed.

C2

Stops CPU main internal clocks via hardware. State in which the processor maintains all software-visible states, but may take longer to wake up through interrupts.

C3

Stops all CPU internal clocks. The processor does not need to keep its cache coherent, but maintains other states. Some processors have variations of the C3 state that differ in how long it takes to wake the processor through interrupts.

To avoid needless power consumption, it is recommended to test your workloads with deep sleep states enabled versus deep sleep states disabled. For more information, refer to Section 11.3.2, “Viewing Kernel Idle Statistics with cpupower” or the cpupower-idle-set(1) man page.

11.1.2 P-States (Processor Performance States)

While a processor operates (in C0 state), it can be in one of several CPU performance states (P-states). Whereas C-states are idle states (all but C0), P-states are operational states that relate to CPU frequency and voltage.

The higher the P-state, the lower the frequency and voltage at which the processor runs. The number of P-states is processor-specific and the implementation differs across the various types. However, P0 is always the highest-performance state (except for Section 11.1.3, “Turbo Features”). Higher P-state numbers represent slower processor speeds and lower power consumption. For example, a processor in P3 state runs more slowly and uses less power than a processor running in the P1 state. To operate at any P-state, the processor must be in the C0 state, which means that it is working and not idling. The CPU P-states are also defined in the ACPI specification, see http://www.acpi.info/spec.htm.

C-states and P-states can vary independently of one another.

11.1.3 Turbo Features

Turbo features allow active CPU cores to be dynamically overclocked while other cores are in deep sleep states. This increases the performance of active threads while still complying with Thermal Design Power (TDP) limits.

However, the conditions under which a CPU core can use turbo frequencies are architecture-specific. Learn how to evaluate the efficiency of those new features in Section 11.3, “The cpupower Tools”.

11.2 In-Kernel Governors

The in-kernel governors belong to the Linux kernel CPUfreq infrastructure and can be used to dynamically scale processor frequencies at runtime. You can think of the governors as a sort of preconfigured power scheme for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies, based on CPU usage, to allow for power savings while not sacrificing performance.

The following governors are available with the CPUfreq subsystem:

Performance Governor

The CPU frequency is statically set to the highest possible for maximum performance. Consequently, saving power is not the focus of this governor.

See also Section 11.5.1, “Tuning Options for P-States”.

Powersave Governor

The CPU frequency is statically set to the lowest possible. This can have a severe impact on performance, as the system will never rise above this frequency no matter how busy the processors are. An important exception is the intel_pstate driver, which defaults to the powersave governor. This is due to a hardware-specific decision, but functionally it operates similarly to the on-demand governor.

However, using this governor often does not lead to the expected power savings as the highest savings can usually be achieved at idle through entering C-states. With the powersave governor, processes run at the lowest frequency and thus take longer to finish. This means it takes longer until the system can go into an idle C-state.

Tuning options: The range of minimum frequencies available to the governor can be adjusted (for example, with the cpupower command line tool).

On-demand Governor

The kernel implementation of a dynamic CPU frequency policy: The governor monitors processor usage. When it exceeds a certain threshold, the governor sets the frequency to the highest available. If the usage is less than the threshold, the next lowest frequency is used. If the system continues to be underutilized, the frequency is reduced again until the lowest available frequency is set.

Important: Drivers and In-kernel Governors

Not all drivers use the in-kernel governors to dynamically scale power frequency at runtime. For example, the intel_pstate driver adjusts power frequency itself. Use the cpupower frequency-info command to find out which driver your system uses.

11.3 The cpupower Tools

The cpupower tools are designed to give an overview of all CPU power-related parameters that are supported on a given machine, including turbo (or boost) states. Use the tool set to view and modify settings of the kernel-related CPUfreq and cpuidle systems and other settings not related to frequency scaling or idle states. The integrated monitoring framework can access both kernel-related parameters and hardware statistics. Therefore, it is ideally suited for performance benchmarks. It also helps you to identify the dependencies between turbo and idle states.

After installing the cpupower package, view the available cpupower subcommands with cpupower --help. Access the general man page with man cpupower, and the man pages of the subcommands with man cpupower-SUBCOMMAND.

11.3.1 Viewing Current Settings with cpupower

The cpupower frequency-info command shows the statistics of the cpufreq driver used in the kernel. Additionally, it shows if turbo (boost) states are supported and enabled in the BIOS. Run without any options, it shows an output similar to the following:

Example 11.1: Example Output of cpupower frequency-info
root # cpupower frequency-info
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 0.97 ms.
  hardware limits: 1.20 GHz - 3.80 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 1.20 GHz and 3.80 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 3.40 GHz (asserted by call to hardware).
  boost state support:
    Supported: yes
    Active: yes
    3500 MHz max turbo 4 active cores
    3600 MHz max turbo 3 active cores
    3600 MHz max turbo 2 active cores
    3800 MHz max turbo 1 active cores

To get the current values for all CPUs, use cpupower -c all frequency-info.

11.3.2 Viewing Kernel Idle Statistics with cpupower

The idle-info subcommand shows the statistics of the cpuidle driver used in the kernel. It works on all architectures that use the cpuidle kernel framework.

Example 11.2: Example Output of cpupower idle-info
root # cpupower idle-info
CPUidle driver: intel_idle
CPUidle governor: menu

Analyzing CPU 0:
Number of idle states: 6
Available idle states: POLL C1-SNB C1E-SNB C3-SNB C6-SNB C7-SNB
POLL:
Flags/Description: CPUIDLE CORE POLL IDLE
Latency: 0
Usage: 163128
Duration: 17585669
C1-SNB:
Flags/Description: MWAIT 0x00
Latency: 2
Usage: 16170005
Duration: 697658910
C1E-SNB:
Flags/Description: MWAIT 0x01
Latency: 10
Usage: 4421617
Duration: 757797385
C3-SNB:
Flags/Description: MWAIT 0x10
Latency: 80
Usage: 2135929
Duration: 735042875
C6-SNB:
Flags/Description: MWAIT 0x20
Latency: 104
Usage: 53268
Duration: 229366052
C7-SNB:
Flags/Description: MWAIT 0x30
Latency: 109
Usage: 62593595
Duration: 324631233978

After finding out which processor idle states are supported with cpupower idle-info, individual states can be disabled using the cpupower idle-set command. Typically one wants to disable the deepest sleep state, for example:

cpupower idle-set -d 5

Or, to disable all idle states with latencies equal to or higher than 80 microseconds:

cpupower idle-set -D 80

11.3.3 Monitoring Kernel and Hardware Statistics with cpupower

Use the monitor subcommand to report the processor topology, and to monitor frequency and idle power state statistics over a certain period of time. The default interval is 1 second, but it can be changed with the -i option. Independent processor sleep state and frequency counters are implemented in the tool; some are retrieved from kernel statistics, while others are read from hardware registers. The available monitors depend on the underlying hardware and the system. List them with cpupower monitor -l. For a description of the individual monitors, refer to the cpupower-monitor man page.

The monitor subcommand allows you to execute performance benchmarks. To compare kernel statistics with hardware statistics for specific workloads, concatenate the respective command, for example:

cpupower monitor db_test.sh

Example 11.3: Example cpupower monitor Output
root # cpupower monitor
|Mperf                   || Idle_Stats
 1                         2 
CPU | C0   | Cx   | Freq || POLL | C1   | C2   | C3
   0|  3.71| 96.29|  2833||  0.00|  0.00|  0.02| 96.32
   1| 100.0| -0.00|  2833||  0.00|  0.00|  0.00|  0.00
   2|  9.06| 90.94|  1983||  0.00|  7.69|  6.98| 76.45
   3|  7.43| 92.57|  2039||  0.00|  2.60| 12.62| 77.52

1

Mperf shows the average frequency of a CPU, including boost frequencies, over time. Additionally, it shows the percentage of time the CPU has been active (C0) or in any sleep state (Cx). As the turbo states are managed by the BIOS, it is impossible to get the frequency values at a given instant. On modern processors with turbo features the Mperf monitor is the only way to find out about the frequency a certain CPU has been running in.

2

Idle_Stats shows the statistics of the cpuidle kernel subsystem. The kernel updates these values every time an idle state is entered or left. Therefore, there can be some inaccuracy when cores have been in an idle state for some time when the measurement starts or ends.

Apart from the (general) monitors in the example above, other architecture-specific monitors are available. For detailed information, refer to the cpupower-monitor man page.

By comparing the values of the individual monitors, you can find correlations and dependencies and evaluate how well the power saving mechanism works for a certain workload. In Example 11.3 you can see that CPU 0 is idle (the value of Cx is near 100%), but runs at a very high frequency. This is because CPUs 0 and 1 have the same frequency values, which means that there is a dependency between them.

11.3.4 Modifying Current Settings with cpupower

You can use the cpupower frequency-set command as root to modify current settings. It allows you to set values for the minimum or maximum CPU frequency the governor may select, or to switch to a new governor. With the -c option, you can also specify which processors the settings should be modified for. That makes it easy to apply a consistent policy across all processors without adjusting the settings for each one individually. For more details and the available options, refer to the cpupower-frequency-set man page or run cpupower frequency-set --help.

11.4 Monitoring Power Consumption with powerTOP

You can monitor system power consumption with powerTOP. It helps you identify the causes of unnecessarily high power consumption (for example, processes that are mainly responsible for waking up a processor from its idle state) and optimize your system settings to avoid them. It supports both Intel and AMD processors. The powertop package is available from the SUSE Linux Enterprise SDK.

The SDK is a module for SUSE Linux Enterprise and is available via an online channel from the SUSE Customer Center. Alternatively, go to http://download.suse.com/, search for SUSE Linux Enterprise Software Development Kit and download it from there. Refer to Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products for details.

powerTOP combines various sources of information (analysis of programs, device drivers, kernel options, amounts and sources of interrupts waking up processors from sleep states) and shows them in one screen. Example 11.4, “Example powerTOP Output” shows which information categories are available:

Example 11.4: Example powerTOP Output
Cn               Avg  residency       P-states   (frequencies)
1                 2      3              4            5
C0 (cpu running)        (11.6%)       2.00 Ghz       0.1%
polling         0.0ms   ( 0.0%)       2.00 Ghz       0.0%
C1              4.4ms   (57.3%)       1.87 Ghz       0.0%
C2             10.0ms   (31.1%)       1064 Mhz      99.9%


Wakeups-from-idle per second : 11.2     interval: 5.0s 6
no ACPI power usage estimate available 7


Top causes for wakeups: 8
96.2% (826.0)       <interrupt> : extra timer interrupt
 0.9% (  8.0)     <kernel core> : usb_hcd_poll_rh_status (rh_timer_func)
 0.3% (  2.4)       <interrupt> : megasas
 0.2% (  2.0)     <kernel core> : clocksource_watchdog (clocksource_watchdog)
 0.2% (  1.6)       <interrupt> : eth1-TxRx-0
 0.1% (  1.0)       <interrupt> : eth1-TxRx-4

[...]

Suggestion: 9 Enable SATA ALPM link power management via:
echo min_power > /sys/class/scsi_host/host0/link_power_management_policy
or press the S key.

1

The column shows the C-states. When working, the CPU is in state C0; when resting, it is in some state greater than 0, depending on which C-states are available and how deeply the CPU is sleeping.

2

The column shows average time in milliseconds spent in the particular C-state.

3

The column shows the percentages of time spent in various C-states. For considerable power savings during idle, the CPU should be in deeper C-states most of the time. In addition, the longer the average time spent in these C-states, the more power is saved.

4

The column shows the frequencies the processor and kernel driver support on your system.

5

The column shows the amount of time the CPU cores stayed in different frequencies during the measuring period.

6

Shows how often the CPU is awoken per second (number of interrupts). The lower the number, the better. The interval value is the powerTOP refresh interval which can be controlled with the -t option. The default time to gather data is 5 seconds.

7

When running powerTOP on a laptop, this line displays the ACPI information on how much power is currently being used and the estimated time until discharge of the battery. On servers, this information is not available.

8

Shows what is causing the system to be more active than needed. powerTOP displays the top items causing your CPU to wake up during the sampling period.

9

Suggestions on how to improve power usage for this machine.

For more information, refer to the powerTOP project page at https://01.org/powertop.

11.5 Special Tuning Options

The following sections highlight important settings.

11.5.1 Tuning Options for P-States

The CPUfreq subsystem offers several tuning options for P-states: You can switch between the different governors, influence minimum or maximum CPU frequency to be used or change individual governor parameters.

To switch to another governor at runtime, use cpupower frequency-set with the -g option. For example, running the following command (as root) will activate the performance governor:

cpupower frequency-set -g performance

To set values for the minimum or maximum CPU frequency the governor may select, use the -d or -u option, respectively.
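
For example, to restrict the governor to a frequency range (the values below are examples; check the limits your hardware supports with cpupower frequency-info):

# Example frequencies only; use values supported by your hardware
cpupower frequency-set -d 1.2GHz -u 2.8GHz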

11.6 Troubleshooting

BIOS options enabled?

To use C-states or P-states, check your BIOS options:

  • To use C-states, make sure to enable CPU C State or similar options to benefit from power savings at idle.

  • To use P-states and the CPUfreq governors, make sure to enable Processor Performance States options or similar.

  • Even if P-states and C-states are available, the platform firmware may be managing CPU frequencies, which can be sub-optimal. For example, if pcc-cpufreq is loaded, the OS only gives hints to the firmware, which is free to ignore them. This can be addressed by selecting "OS Management" or a similar option for CPU frequency handling in the BIOS. After a reboot, an alternative driver will be used, but the performance impact should be carefully measured.

In case of a CPU upgrade, make sure to upgrade your BIOS, too. The BIOS needs to know the new CPU and its frequency stepping to pass this information on to the operating system.

Log file information?

Check the systemd journal (see Chapter 16, journalctl: Query the systemd Journal) for any output regarding the CPUfreq subsystem. Only severe errors are reported there.

If you suspect problems with the CPUfreq subsystem on your machine, you can also enable additional debug output. To do so, either use cpufreq.debug=7 as boot parameter or execute the following command as root:

echo 7 > /sys/module/cpufreq/parameters/debug

This causes CPUfreq to log more information to dmesg on state transitions, which is useful for diagnosis. As this additional output of kernel messages can be rather verbose, only use it if you are fairly sure that a problem exists.

11.7 For More Information

Platforms with a Baseboard Management Controller (BMC) may have additional power management configuration options accessible via the service processor. These configurations are vendor specific and therefore not subject of this guide. For more information, refer to the manuals provided by your vendor.

For more information about powerTOP, refer to https://01.org/powertop.

Part V Kernel Tuning

12 Tuning I/O Performance

I/O scheduling controls how input/output operations will be submitted to storage. SUSE Linux Enterprise Desktop offers various I/O algorithms—called elevators—suiting different workloads. Elevators can help to reduce seek operations and can prioritize I/O requests.

13 Tuning the Task Scheduler

Modern operating systems, such as SUSE® Linux Enterprise Desktop, normally run many tasks at the same time. For example, you can be searching in a text file while receiving an e-mail and copying a big file to an external hard disk. These simple tasks require many additional processes to be run by the system.

14 Tuning the Memory Management Subsystem

To understand and tune the memory management behavior of the kernel, it is important to first have an overview of how it works and cooperates with other subsystems.

15 Tuning the Network

The network subsystem is complex and its tuning highly depends on the system use scenario and on external factors such as software clients or hardware components (switches, routers, or gateways) in your network. The Linux kernel aims more at reliability and low latency than low overhead and high throughput.

12 Tuning I/O Performance

I/O scheduling controls how input/output operations will be submitted to storage. SUSE Linux Enterprise Desktop offers various I/O algorithms—called elevators—suiting different workloads. Elevators can help to reduce seek operations and can prioritize I/O requests.

Choosing the best suited I/O elevator not only depends on the workload, but on the hardware, too. Single ATA disk systems, SSDs, RAID arrays, or network storage systems, for example, each require different tuning strategies.

12.1 Switching I/O Scheduling

SUSE Linux Enterprise Desktop picks a default I/O scheduler at boot-time, which can be changed on the fly per block device. This makes it possible to set different algorithms, for example, for the device hosting the system partition and the device hosting a database.

The default I/O scheduler is chosen for each device based on whether the device reports to be a rotational disk or not. For non-rotational disks, the DEADLINE I/O scheduler is picked. Other devices default to CFQ (Completely Fair Queuing). To change this default, use the following boot parameter:

elevator=SCHEDULER

Replace SCHEDULER with one of the values cfq, noop, or deadline. See Section 12.2, “Available I/O Elevators” for details.

To change the elevator for a specific device in the running system, run the following command:

echo SCHEDULER > /sys/block/DEVICE/queue/scheduler

Here, SCHEDULER is one of cfq, noop, or deadline. DEVICE is the block device (sda for example). Note that this change will not persist across reboots. To change the I/O scheduler for a particular device permanently, either place the command switching the I/O scheduler into an init script, or add an appropriate udev rule to /lib/udev/rules.d/. See /lib/udev/rules.d/60-ssd-scheduler.rules for an example of such tuning.
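
As a sketch modeled on that file, a rule selecting the DEADLINE elevator for non-rotational disks could look as follows; the file name is arbitrary:

# /etc/udev/rules.d/61-io-scheduler.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"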

12.2 Available I/O Elevators

In the following, the elevators available on SUSE Linux Enterprise Desktop are listed. Each elevator has a set of tunable parameters, which can be set with the following command:

echo VALUE > /sys/block/DEVICE/queue/iosched/TUNABLE

where VALUE is the desired value for the TUNABLE and DEVICE the block device.

To find out which elevator is the current default, run the following command. The currently selected scheduler is listed in brackets:

jupiter:~ # cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

This file can also contain the string none, meaning that I/O scheduling does not happen for this device. This is usually because the device uses the multi-queue (blk-mq) queueing mechanism (refer to Section 12.4, “Enable blk-mq I/O Path for SCSI by Default”).

12.2.1 CFQ (Completely Fair Queuing)

CFQ is a fairness-oriented scheduler and is used by default on SUSE Linux Enterprise Desktop. The algorithm assigns each thread a time slice in which it is allowed to submit I/O to disk. This way each thread gets a fair share of I/O throughput. It also allows assigning tasks I/O priorities which are taken into account during scheduling decisions (see Section 8.3.3, “Prioritizing Disk Access with ionice”). The CFQ scheduler has the following tunable parameters:

/sys/block/DEVICE/queue/iosched/slice_idle_us

When a task has no more I/O to submit in its time slice, the I/O scheduler waits for a while before scheduling the next thread. slice_idle_us is the time in microseconds the I/O scheduler waits. The file slice_idle controls the same tunable, but in millisecond units. Waiting for more I/O from a thread can improve the locality of I/O. Additionally, it avoids starving processes that do dependent I/O. A process does dependent I/O if it needs the result of one I/O operation to submit another. For example, if you first need to read an index block to find the data block to read, these two reads form dependent I/O.

For media where locality does not play a big role (SSDs, SANs with lots of disks) setting /sys/block/<device>/queue/iosched/slice_idle_us to 0 can improve the throughput considerably.

/sys/block/DEVICE/queue/iosched/quantum

This option limits the maximum number of requests that are being processed at once by the device. The default value is 4. For a storage with several disks, this setting can unnecessarily limit parallel processing of requests. Therefore, increasing the value can improve performance. However, it can also cause latency of certain I/O operations to increase because more requests are buffered inside the storage. When changing this value, you can also consider tuning /sys/block/DEVICE/queue/iosched/slice_async_rq (the default value is 2). This limits the maximum number of asynchronous requests—usually write requests—that are submitted in one time slice.

/sys/block/DEVICE/queue/iosched/low_latency

When enabled (which is the default on SUSE Linux Enterprise Desktop) the scheduler may dynamically adjust the length of the time slice by aiming to meet a tuning parameter called the target_latency. Time slices are recomputed to meet this target_latency and ensure that processes get fair access within a bounded length of time.

/sys/block/DEVICE/queue/iosched/target_latency

Contains an estimated latency time for the CFQ. CFQ will use it to calculate the time slice used for every task.

/sys/block/DEVICE/queue/iosched/group_idle_us

To avoid starving blkio cgroups doing dependent I/O, CFQ waits a bit after the completion of I/O for one blkio cgroup before scheduling I/O for a different blkio cgroup. When slice_idle_us is set, this parameter does not have a big impact. However, for fast media, the overhead of slice_idle_us is generally undesirable. Disabling slice_idle_us and setting group_idle_us is a method to avoid starvation of blkio cgroups doing dependent I/O with lower overhead. Note that the file group_idle controls the same tunable, but with millisecond granularity.

Example 12.1: Increasing individual thread throughput using CFQ

In SUSE Linux Enterprise Desktop 12 SP3, the low_latency tuning parameter is enabled by default to ensure that processes get fair access within a bounded length of time. (Note that this parameter was not enabled in versions prior to SUSE Linux Enterprise 12.)

This is usually preferred in a server scenario where processes are executing I/O as part of transactions, as it makes the time needed for each transaction predictable. However, there are scenarios where that is not the desired behavior:

  • If the performance metric of interest is the peak performance of a single process when there is I/O contention.

  • If a workload must complete as quickly as possible and there are multiple sources of I/O. In this case, unfair treatment from the I/O scheduler may allow the transactions to complete faster: Processes take their full slice and exit quickly, resulting in reduced overall contention.

To address this, there are two options—increase target_latency or disable low_latency. As with all tuning parameters it is important to verify your workload behaves as expected before and after the tuning modification. Take careful note of whether your workload depends on individual process peak performance or scales better with fairness. It should also be noted that the performance will depend on the underlying storage and the correct tuning option for one installation may not be universally true.

The following example does not control when I/O starts, but is simple enough to demonstrate the point: 32 processes write a small amount of data to disk in parallel. Using the SUSE Linux Enterprise Desktop default (low_latency enabled), the result looks as follows:

root # echo 1 > /sys/block/sda/queue/iosched/low_latency
root # time ./dd-test.sh
10485760 bytes (10 MB) copied, 2.62464 s, 4.0 MB/s
10485760 bytes (10 MB) copied, 3.29624 s, 3.2 MB/s
10485760 bytes (10 MB) copied, 3.56341 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.56908 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.53043 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.57511 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.53672 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.5433 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.65474 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.63694 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.90122 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.88507 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.86135 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.84553 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.88871 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.94943 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 4.12731 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.15106 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.21601 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.35004 s, 2.4 MB/s
10485760 bytes (10 MB) copied, 4.33387 s, 2.4 MB/s
10485760 bytes (10 MB) copied, 4.55434 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.52283 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.52682 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.56176 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.62727 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.78958 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.79772 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.78004 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.77994 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.86114 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.88062 s, 2.1 MB/s

real    0m4.978s
user    0m0.112s
sys     0m1.544s

Note that each process completes in similar times. This is the CFQ scheduler meeting its target_latency: Each process has fair access to storage.

Note that the earlier processes complete somewhat faster. This happens because the start time of the processes is not identical. In a more complicated example, it is possible to control for this.

This is what happens when low_latency is disabled:

root # echo 0 > /sys/block/sda/queue/iosched/low_latency
root # time ./dd-test.sh
10485760 bytes (10 MB) copied, 0.813519 s, 12.9 MB/s
10485760 bytes (10 MB) copied, 0.788106 s, 13.3 MB/s
10485760 bytes (10 MB) copied, 0.800404 s, 13.1 MB/s
10485760 bytes (10 MB) copied, 0.816398 s, 12.8 MB/s
10485760 bytes (10 MB) copied, 0.959087 s, 10.9 MB/s
10485760 bytes (10 MB) copied, 1.09563 s, 9.6 MB/s
10485760 bytes (10 MB) copied, 1.18716 s, 8.8 MB/s
10485760 bytes (10 MB) copied, 1.27661 s, 8.2 MB/s
10485760 bytes (10 MB) copied, 1.46312 s, 7.2 MB/s
10485760 bytes (10 MB) copied, 1.55489 s, 6.7 MB/s
10485760 bytes (10 MB) copied, 1.64277 s, 6.4 MB/s
10485760 bytes (10 MB) copied, 1.78196 s, 5.9 MB/s
10485760 bytes (10 MB) copied, 1.87496 s, 5.6 MB/s
10485760 bytes (10 MB) copied, 1.9461 s, 5.4 MB/s
10485760 bytes (10 MB) copied, 2.08351 s, 5.0 MB/s
10485760 bytes (10 MB) copied, 2.28003 s, 4.6 MB/s
10485760 bytes (10 MB) copied, 2.42979 s, 4.3 MB/s
10485760 bytes (10 MB) copied, 2.54564 s, 4.1 MB/s
10485760 bytes (10 MB) copied, 2.6411 s, 4.0 MB/s
10485760 bytes (10 MB) copied, 2.75171 s, 3.8 MB/s
10485760 bytes (10 MB) copied, 2.86162 s, 3.7 MB/s
10485760 bytes (10 MB) copied, 2.98453 s, 3.5 MB/s
10485760 bytes (10 MB) copied, 3.13723 s, 3.3 MB/s
10485760 bytes (10 MB) copied, 3.36399 s, 3.1 MB/s
10485760 bytes (10 MB) copied, 3.60018 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.58151 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.67385 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.69471 s, 2.8 MB/s
10485760 bytes (10 MB) copied, 3.66658 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.81495 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 4.10172 s, 2.6 MB/s
10485760 bytes (10 MB) copied, 4.0966 s, 2.6 MB/s

real    0m3.505s
user    0m0.160s
sys     0m1.516s

Note that the time processes take to complete is spread much wider as processes are not getting fair access. Some processes complete faster and exit, allowing the total workload to complete faster, and some processes measure higher apparent I/O performance. It is also important to note that this example may not behave similarly on all systems as the results depend on the resources of the machine and the underlying storage.

It is important to emphasize that neither tuning option is inherently better than the other. Both are best in different circumstances and it is important to understand the requirements of your workload and tune accordingly.

12.2.2 NOOP

A trivial scheduler that only passes down the I/O that comes to it. Useful for checking whether complex I/O scheduling decisions of other schedulers are causing I/O performance regressions.

This scheduler is recommended for setups with devices that do I/O scheduling themselves, such as intelligent storage or in multipathing environments. If you choose a more complicated scheduler on the host, the scheduler of the host and the scheduler of the storage device compete with each other. This can decrease performance. The storage device can usually determine best how to schedule I/O.

For similar reasons, this scheduler is also recommended for use within virtual machines.

The NOOP scheduler can be useful for devices that do not depend on mechanical movement, like SSDs. Usually, the DEADLINE I/O scheduler is a better choice for these devices. However, NOOP creates less overhead and thus can on certain workloads increase performance.

12.2.3 DEADLINE

DEADLINE is a latency-oriented I/O scheduler. Each I/O request is assigned a deadline. Usually, requests are stored in queues (read and write) sorted by sector numbers. The DEADLINE algorithm maintains two additional queues (read and write) in which requests are sorted by deadline. As long as no request has timed out, the sector queue is used. When timeouts occur, requests from the deadline queue are served until there are no more expired requests. Generally, the algorithm prefers reads over writes.

This scheduler can provide a superior throughput over the CFQ I/O scheduler in cases where several threads read and write and fairness is not an issue. For example, for several parallel readers from a SAN and for databases (especially when using TCQ disks). The DEADLINE scheduler has the following tunable parameters:

/sys/block/<device>/queue/iosched/writes_starved

Controls how many reads can be sent to disk before it is possible to send writes. A value of 3 means that three read operations can be carried out for one write operation.

/sys/block/<device>/queue/iosched/read_expire

Sets the deadline (current time plus the read_expire value) for read operations in milliseconds. The default is 500.

/sys/block/<device>/queue/iosched/write_expire

Sets the deadline (current time plus the write_expire value) for write operations in milliseconds. The default is 5000.
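
These tunables can be changed at runtime by writing to the respective sysfs files. For example, to halve the read deadline on a hypothetical device sda:

root # echo 250 > /sys/block/sda/queue/iosched/read_expire
root # cat /sys/block/sda/queue/iosched/read_expire
250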

12.3 I/O Barrier Tuning

Most file systems (such as XFS, Ext3, Ext4, or reiserfs) send write barriers to disk after fsync or during transaction commits. Write barriers enforce proper ordering of writes, making volatile disk write caches safe to use (at some performance penalty). If your disks are battery-backed in one way or another, disabling barriers can safely improve performance.

Sending write barriers can be disabled using the nobarrier mount option.
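
For example, to remount a file system with barriers disabled (assuming the hypothetical device /dev/sdb1 mounted at /data; note that the exact option name differs between file systems, for example Ext3 uses barrier=0):

root # mount -o remount,nobarrier /dev/sdb1 /data

To make the change persistent, add the option to the corresponding entry in /etc/fstab.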

Warning
Warning: Disabling Barriers Can Lead to Data Loss

Disabling barriers when disks cannot guarantee caches are properly written in case of power failure can lead to severe file system corruption and data loss.

12.4 Enable blk-mq I/O Path for SCSI by Default

Block multiqueue (blk-mq) is a multi-queue block I/O queueing mechanism. Blk-mq uses per-cpu software queues to queue I/O requests. The software queues are mapped to one or more hardware submission queues. Blk-mq significantly reduces lock contention. In particular blk-mq improves performance for devices that support a high number of input/output operations per second (IOPS). Blk-mq is already the default for some devices, for example, NVM Express devices.

Currently blk-mq has no I/O scheduling support (no CFQ, no deadline I/O scheduler). This lack of I/O scheduling can cause significant performance degradation when spinning disks are used. Therefore blk-mq is not enabled by default for SCSI devices.

If you have fast SCSI devices (for example, SSDs) instead of SCSI hard disks attached to your system, consider switching to blk-mq for SCSI. This is done using the kernel command line option scsi_mod.use_blk_mq=1.
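
On SUSE Linux Enterprise Desktop, kernel command line options are typically set in the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub. A sketch of the procedure (the existing options on your system will differ from this example):

GRUB_CMDLINE_LINUX_DEFAULT="... scsi_mod.use_blk_mq=1"

Then regenerate the boot loader configuration and reboot:

root # grub2-mkconfig -o /boot/grub2/grub.cfg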

13 Tuning the Task Scheduler

Modern operating systems, such as SUSE® Linux Enterprise Desktop, normally run many tasks at the same time. For example, you can be searching in a text file while receiving an e-mail and copying a big file to an external hard disk. These simple tasks require many additional processes to be run by the system. To provide each task with its required system resources, the Linux kernel needs a tool to distribute available system resources to individual tasks. And this is exactly what the task scheduler does.

The following sections explain the most important terms related to process scheduling. They also describe the task scheduler policy and scheduling algorithm, characterize the task scheduler used by SUSE Linux Enterprise Desktop, and list references to other sources of relevant information.

13.1 Introduction

The Linux kernel controls the way that tasks (or processes) are managed on the system. The task scheduler, sometimes called process scheduler, is the part of the kernel that decides which task to run next. It is responsible for best using system resources to guarantee that multiple tasks are being executed simultaneously. This makes it a core component of any multitasking operating system.

13.1.1 Preemption

The theory behind task scheduling is very simple. If there are runnable processes in a system, at least one process must always be running. If there are more runnable processes than processors in a system, not all the processes can be running all the time.

Therefore, some processes need to be stopped temporarily, or suspended, so that others can be running again. The scheduler decides what process in the queue will run next.

As already mentioned, Linux, like all other Unix variants, is a multitasking operating system: several tasks can run at the same time. Linux provides so-called preemptive multitasking, where the scheduler decides when a process is suspended. This forced suspension is called preemption. All Unix flavors have provided preemptive multitasking since the beginning.

13.1.2 Timeslice

The time period for which a process runs before it is preempted is defined in advance and is called the timeslice of a process. It represents the amount of processor time provided to each process. By assigning timeslices, the scheduler makes global decisions for the running system and prevents individual processes from monopolizing processor resources.

13.1.3 Process Priority

The scheduler evaluates processes based on their priority. To calculate the current priority of a process, the task scheduler uses complex algorithms. As a result, each process is given a value according to which it is allowed to run on a processor.

13.2 Process Classification

Processes are usually classified according to their purpose and behavior. Although the borderline is not always sharp, two criteria are generally used to sort them. These criteria are independent and do not exclude each other.

One approach is to classify a process as either I/O-bound or processor-bound.

I/O-bound

I/O stands for input/output devices, such as keyboards, mice, or optical and hard disks. I/O-bound processes spend the majority of their time submitting and waiting for requests. They are run very frequently, but only for short time intervals, so that they do not block other processes waiting for I/O requests.

processor-bound

On the other hand, processor-bound tasks use their time to execute code, and usually run until they are preempted by the scheduler. They do not block processes waiting for I/O requests and can therefore be run less frequently but for longer time intervals.

Another approach is to divide processes by type into interactive, batch, and real-time processes.

  • Interactive processes spend a lot of time waiting for I/O requests, such as keyboard or mouse operations. The scheduler must wake up such processes quickly on user request, or the user will find the environment unresponsive. The typical delay is approximately 100 ms. Office applications, text editors or image manipulation programs represent typical interactive processes.

  • Batch processes often run in the background and do not need to be responsive. They usually receive lower priority from the scheduler. Multimedia converters, database search engines, or log files analyzers are typical examples of batch processes.

  • Real-time processes must never be blocked by low-priority processes, and the scheduler guarantees a short response time to them. Applications for editing multimedia content are a good example here.

13.3 Completely Fair Scheduler

Since the Linux kernel version 2.6.23, a new approach has been taken to the scheduling of runnable processes. Completely Fair Scheduler (CFS) became the default Linux kernel scheduler. Since then, important changes and improvements have been made. The information in this chapter applies to SUSE Linux Enterprise Desktop with kernel version 2.6.32 and higher (including 3.x kernels). The scheduler environment was divided into several parts, and three main new features were introduced:

Modular Scheduler Core

The core of the scheduler was enhanced with scheduling classes. These classes are modular and represent scheduling policies.

Completely Fair Scheduler

Introduced in kernel 2.6.23 and extended in 2.6.24, CFS tries to assure that each process obtains its fair share of the processor time.

Group Scheduling

For example, if you split processes into groups according to which user is running them, CFS tries to provide each of these groups with the same amount of processor time.

As a result, CFS brings optimized scheduling for both servers and desktops.

13.3.1 How CFS Works

CFS tries to guarantee a fair approach to each runnable task. To find the most balanced way of scheduling tasks, it uses a red-black tree: a self-balancing binary search tree in which entries can be inserted and removed efficiently while the tree remains well balanced. For more information, see the Wikipedia article on red-black trees.

As a task runs, it accumulates virtual runtime (vruntime). The next task picked to run is always the task with the minimum accumulated vruntime so far. Because the red-black tree is balanced when tasks are inserted into the run queue (a planned time line of processes to be executed next), the task with the minimum vruntime is always the first entry in the red-black tree.

The amount of vruntime a task accrues is related to its priority. High priority tasks gain vruntime at a slower rate than low priority tasks, which results in high priority tasks being picked to run on the processor more often.

13.3.2 Grouping Processes

Since the Linux kernel version 2.6.24, CFS can be tuned to be fair to groups rather than to tasks only. Runnable tasks are then grouped to form entities, and CFS tries to be fair to these entities instead of individual runnable tasks. The scheduler also tries to be fair to individual tasks within these entities.

The kernel scheduler lets you group runnable tasks using control groups. For more information, see Chapter 9, Kernel Control Groups.

13.3.3 Kernel Configuration Options

Basic aspects of the task scheduler behavior can be set through kernel configuration options. Setting these options is part of the kernel compilation process. Because compiling a kernel is a complex task that is out of this document's scope, refer to a relevant source of information.

Warning
Warning: Kernel Compilation

If you run SUSE Linux Enterprise Desktop on a kernel that was not shipped with it, for example on a self-compiled kernel, you lose the entire support entitlement.

13.3.4 Terminology

Documents regarding task scheduling policy often use several technical terms which you need to know to understand the information correctly. Here are some:

Latency

Delay between the time a process is scheduled to run and the actual process execution.

Granularity

The relation between granularity and latency can be expressed by the following equation:

gran = ( lat / rtasks ) - ( lat / rtasks / rtasks )

where gran stands for granularity, lat stands for latency, and rtasks is the number of running tasks.

13.3.4.1 Scheduling Policies

The Linux kernel supports the following scheduling policies:

SCHED_FIFO

Scheduling policy designed for special time-critical applications. It uses the First In-First Out scheduling algorithm.

SCHED_BATCH

Scheduling policy designed for CPU-intensive tasks.

SCHED_IDLE

Scheduling policy intended for very low prioritized tasks.

SCHED_OTHER

Default Linux time-sharing scheduling policy used by the majority of processes.

SCHED_RR

Similar to SCHED_FIFO, but uses the Round Robin scheduling algorithm.

13.3.5 Changing Real-time Attributes of Processes with chrt

The chrt command sets or retrieves the real-time scheduling attributes of a running process, or runs a command with the specified attributes. You can set or retrieve both the scheduling policy and the priority of a process.

In the following examples, a process whose PID is 16244 is used.

To retrieve the real-time attributes of an existing task:

root # chrt -p 16244
pid 16244's current scheduling policy: SCHED_OTHER
pid 16244's current scheduling priority: 0

Before setting a new scheduling policy on the process, you need to find out the minimum and maximum valid priorities for each scheduling algorithm:

root # chrt -m
SCHED_OTHER min/max priority : 0/0
SCHED_FIFO min/max priority : 1/99
SCHED_RR min/max priority : 1/99
SCHED_BATCH min/max priority : 0/0
SCHED_IDLE min/max priority : 0/0

In the above example, the SCHED_OTHER, SCHED_BATCH, and SCHED_IDLE policies only allow priority 0, while the priority of SCHED_FIFO and SCHED_RR can range from 1 to 99.

To set SCHED_BATCH scheduling policy:

root # chrt -b -p 0 16244
pid 16244's current scheduling policy: SCHED_BATCH
pid 16244's current scheduling priority: 0
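
chrt can also run a new command directly with the requested attributes. A minimal sketch, assuming a hypothetical program my_rt_app that should run under the SCHED_FIFO policy with priority 50:

root # chrt -f 50 ./my_rt_app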

For more information on chrt, see its man page (man 1 chrt).

13.3.6 Runtime Tuning with sysctl

The sysctl interface for examining and changing kernel parameters at runtime exposes important variables by means of which you can change the default behavior of the task scheduler. The syntax of sysctl is simple, and all the following commands must be entered on the command line as root.

To read a value from a kernel variable, enter

sysctl VARIABLE

To assign a value, enter

sysctl VARIABLE=VALUE
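
For example, to read and then change the targeted preemption latency kernel.sched_latency_ns (one of the variables described below; the values shown here are illustrative and will differ on your system):

root # sysctl kernel.sched_latency_ns
kernel.sched_latency_ns = 24000000
root # sysctl kernel.sched_latency_ns=12000000
kernel.sched_latency_ns = 12000000

Changes made this way do not survive a reboot. To make a setting persistent, add it to /etc/sysctl.conf.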

To get a list of all scheduler related sysctl variables, enter

root # sysctl -A | grep "sched" | grep -v "domain"
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_compat_yield = 0
kernel.sched_latency_ns = 24000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 8000000
kernel.sched_nr_migrate = 32
kernel.sched_rr_timeslice_ms = 25
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_schedstats = 0
kernel.sched_shares_window_ns = 10000000
kernel.sched_time_avg_ms = 1000
kernel.sched_tunable_scaling = 1
kernel.sched_wakeup_granularity_ns = 10000000

Note that variables ending with _ns and _us accept values in nanoseconds and microseconds, respectively.

A list of the most important task scheduler sysctl tuning variables (located at /proc/sys/kernel/) with a short description follows:

sched_cfs_bandwidth_slice_us

When CFS bandwidth control is in use, this parameter controls the amount of run time (bandwidth) transferred to a run queue from the task's control group bandwidth pool. Small values allow the global bandwidth to be shared among tasks in a fine-grained manner, while larger values reduce transfer overhead. See https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt.

sched_child_runs_first

If set to 1, a freshly forked child runs before the parent continues execution. This is beneficial for applications in which the child performs an exec call immediately after forking. For example, make -j<NO_CPUS> performs better when sched_child_runs_first is turned off. The default value is 0.

sched_compat_yield

Enables the aggressive CPU yielding behavior of the old O(1) scheduler by moving the relinquishing task to the end of the runnable queue (right-most position in the red-black tree). Applications that depend on the sched_yield(2) syscall behavior may see performance improvements by giving other processes a chance to run when there are highly contended resources (such as locks). On the other hand, given that this call occurs in context switching, misusing the call can hurt the workload. Only use it when you see a drop in performance. The default value is 0.

sched_migration_cost_ns

Amount of time after the last execution that a task is considered to be cache hot in migration decisions. A hot task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations. The default value is 500000 (ns).

If the CPU idle time is higher than expected when there are runnable processes, try reducing this value. If tasks bounce between CPUs or nodes too often, try increasing it.

sched_latency_ns

Targeted preemption latency for CPU bound tasks. Increasing this variable increases a CPU bound task's timeslice. A task's timeslice is its weighted fair share of the scheduling period:

timeslice = scheduling period * (task's weight/total weight of tasks in the run queue)

The task's weight depends on the task's nice level and the scheduling policy. Minimum task weight for a SCHED_OTHER task is 15, corresponding to nice 19. The maximum task weight is 88761, corresponding to nice -20.

Timeslices become smaller as the load increases. When the number of runnable tasks exceeds sched_latency_ns/sched_min_granularity_ns, the scheduling period becomes number_of_running_tasks * sched_min_granularity_ns; before that point, the period is equal to sched_latency_ns.

This value also specifies the maximum amount of time during which a sleeping task is considered to be running for entitlement calculations. Increasing this variable increases the amount of time a waking task may consume before being preempted, thus increasing scheduler latency for CPU bound tasks. The default value is 6000000 (ns).
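
As a worked example, take the values from the sysctl listing above: sched_latency_ns = 24000000 and sched_min_granularity_ns = 8000000, so the ratio is 3. With two equal-weight runnable tasks, the scheduling period is 24 ms and each task receives a 12 ms timeslice. With six equal-weight runnable tasks, the number of tasks exceeds the ratio, so the period grows to 6 * 8 ms = 48 ms and each task's fair share is the minimum granularity of 8 ms.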

sched_min_granularity_ns

Minimal preemption granularity for CPU bound tasks. See sched_latency_ns for details. The default value is 4000000 (ns).

sched_wakeup_granularity_ns

The wake-up preemption granularity. Increasing this variable reduces wake-up preemption, reducing disturbance of compute bound tasks. Lowering it improves wake-up latency and throughput for latency critical tasks, particularly when a short duty cycle load component must compete with CPU bound components. The default value is 2500000 (ns).

Warning
Warning: Setting the Right Wake-up Granularity Value

Settings larger than half of sched_latency_ns will result in no wake-up preemption. Short duty cycle tasks will be unable to compete with CPU hogs effectively.

sched_rr_timeslice_ms

Quantum that SCHED_RR tasks are allowed to run before they are preempted and put to the end of the task list.

sched_rt_period_us

Period over which real-time task bandwidth enforcement is measured. The default value is 1000000 (µs).

sched_rt_runtime_us

Quantum allocated to real-time tasks during sched_rt_period_us. Setting this value to -1 disables RT bandwidth enforcement. By default, real-time tasks may consume 95% of the CPU time per second, leaving the remaining 5% (0.05 s per second) to SCHED_OTHER tasks. The default value is 950000 (µs).

sched_nr_migrate

Controls how many tasks can be migrated across processors for load-balancing purposes. Because balancing iterates the run queue with interrupts disabled (softirq), it can incur irq-latency penalties for real-time tasks. Therefore, increasing this value may give a performance boost to large SCHED_OTHER workloads at the expense of increased irq latencies for real-time tasks. The default value is 32.

sched_time_avg_ms

This parameter sets the period over which the time spent running real-time tasks is averaged. That average assists CFS in making load-balancing decisions and gives an indication of how busy a CPU is with high-priority real-time tasks.

The optimal setting for this parameter is highly workload dependent and depends, among other things, on how frequently real-time tasks are running and for how long.

13.3.7 Debugging Interface and Scheduler Statistics

CFS comes with an improved debugging interface and provides runtime statistics. The relevant files were added to the /proc file system and can be examined simply with the cat or less command. A list of the related /proc files follows, with short descriptions:

/proc/sched_debug

Contains the current values of all tunable variables (see Section 13.3.6, “Runtime Tuning with sysctl”) that affect the task scheduler behavior, CFS statistics, and information about the run queues (CFS, RT and deadline) on all available processors. A summary of the tasks running on each processor is also shown, with the task name and PID, along with scheduler-specific statistics. The first of these, the tree-key column, indicates the task's virtual runtime; its name comes from the kernel sorting all runnable tasks by this key in a red-black tree. The switches column indicates the total number of switches (involuntary or not), and prio refers to the process priority. The wait-time value indicates the amount of time the task waited to be scheduled. Finally, sum-exec and sum-sleep account for the total amount of time (in nanoseconds) the task was running on the processor or asleep, respectively.

root # cat /proc/sched_debug
Sched Debug Version: v0.11, 4.4.21-64-default #1
ktime                                   : 23533900.395978
sched_clk                               : 23543587.726648
cpu_clk                                 : 23533900.396165
jiffies                                 : 4300775771
sched_clock_stable                      : 0

sysctl_sched
  .sysctl_sched_latency                    : 6.000000
  .sysctl_sched_min_granularity            : 2.000000
  .sysctl_sched_wakeup_granularity         : 2.500000
  .sysctl_sched_child_runs_first           : 0
  .sysctl_sched_features                   : 154871
  .sysctl_sched_tunable_scaling            : 1 (logaritmic)

cpu#0, 2666.762 MHz
  .nr_running                    : 1
  .load                          : 1024
  .nr_switches                   : 1918946
[...]

cfs_rq[0]:/
  .exec_clock                    : 170176.383770
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 347375.854324
  .max_vruntime                  : 0.000001
[...]

rt_rq[0]:/
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 950.000000

dl_rq[0]:
  .dl_nr_running                 : 0

  task   PID         tree-key  switches  prio     wait-time        [...]
------------------------------------------------------------------------
R  cc1 63477     98876.717832       197   120      0.000000         ...

/proc/schedstat

Displays statistics relevant to the current run queue. On SMP systems, domain-specific statistics for all connected processors are displayed as well. Because the output format is not user-friendly, read the contents of /usr/src/linux/Documentation/scheduler/sched-stats.txt for more information.

/proc/PID/sched

Displays scheduling information on the process with id PID.

root # cat /proc/$(pidof gdm)/sched
gdm (744, #threads: 3)
-------------------------------------------------------------------
se.exec_start                                :          8888.758381
se.vruntime                                  :          6062.853815
se.sum_exec_runtime                          :             7.836043
se.statistics.wait_start                     :             0.000000
se.statistics.sleep_start                    :          8888.758381
se.statistics.block_start                    :             0.000000
se.statistics.sleep_max                      :          1965.987638
[...]
se.avg.decay_count                           :                 8477
policy                                       :                    0
prio                                         :                  120
clock-delta                                  :                  128
mm->numa_scan_seq                            :                    0
numa_migrations, 0
numa_faults_memory, 0, 0, 1, 0, -1
numa_faults_memory, 1, 0, 0, 0, -1

13.4 For More Information

For a compact overview of Linux kernel task scheduling, you need to explore several sources of information. Here are some:

  • For a description of the task scheduler system calls, see the relevant manual page (for example, man 2 sched_setaffinity).

  • General information on scheduling is described in the Scheduling wiki page.

  • A useful lecture on Linux scheduler policy and algorithm is available at http://www.inf.fu-berlin.de/lehre/SS01/OS/Lectures/Lecture08.pdf.

  • A good overview of Linux process scheduling is given in Linux Kernel Development by Robert Love (ISBN-10: 0-672-32512-8). See http://www.informit.com/articles/article.aspx?p=101760.

  • A very comprehensive overview of the Linux kernel internals is given in Understanding the Linux Kernel by Daniel P. Bovet and Marco Cesati (ISBN 978-0-596-00565-8).

  • Technical information about the task scheduler is covered in files under /usr/src/linux/Documentation/scheduler.

14 Tuning the Memory Management Subsystem

To understand and tune the memory management behavior of the kernel, it is important to first have an overview of how it works and cooperates with other subsystems.

The memory management subsystem, also called the virtual memory manager, will subsequently be called VM. The role of the VM is to manage the allocation of physical memory (RAM) for the entire kernel and user programs. It is also responsible for providing a virtual memory environment for user processes (managed via POSIX APIs with Linux extensions). Finally, the VM is responsible for freeing up RAM when there is a shortage, either by trimming caches or swapping out anonymous memory.

The most important thing to understand when examining and tuning VM is how its caches are managed. The basic goal of the VM's caches is to minimize the cost of I/O as generated by swapping and file system operations (including network file systems). This is achieved by avoiding I/O completely, or by submitting I/O in better patterns.

Free memory will be used and filled up by these caches as required. The more memory is available for caches and anonymous memory, the more effectively caches and swapping will operate. However, if a memory shortage is encountered, caches will be trimmed or memory will be swapped out.

For a particular workload, the first thing that can be done to improve performance is to increase memory and reduce the frequency that memory must be trimmed or swapped. The second thing is to change the way caches are managed by changing kernel parameters.

Finally, the workload itself should be examined and tuned as well. If an application is allowed to run more processes or threads, the effectiveness of VM caches can be reduced if each process operates in its own area of the file system. Memory overheads are also increased. If applications allocate their own buffers or caches, larger caches will mean that less memory is available for VM caches. However, more processes and threads can mean more opportunity to overlap and pipeline I/O, and may take better advantage of multiple cores. Experimentation will be required for the best results.

14.1 Memory Usage

Memory allocations in general can be characterized as pinned (also known as unreclaimable), reclaimable or swappable.

14.1.1 Anonymous Memory

Anonymous memory tends to be program heap and stack memory (for example, malloc()). It is reclaimable, except in special cases such as mlock or if there is no available swap space. Anonymous memory must be written to swap before it can be reclaimed. Swap I/O (both swapping in and swapping out pages) tends to be less efficient than pagecache I/O, because of allocation and access patterns.

14.1.2 Pagecache

A cache of file data. When a file is read from disk or network, the contents are stored in pagecache. No disk or network access is required if the contents are up-to-date in pagecache. tmpfs and shared memory segments count toward pagecache.

When a file is written to, the new data is stored in pagecache before being written back to a disk or the network (making it a write-back cache). When a page has new data not written back yet, it is called dirty. Pages not classified as dirty are clean. Clean pagecache pages can be reclaimed if there is a memory shortage by simply freeing them. Dirty pages must first be made clean before being reclaimed.

14.1.3 Buffercache

This is a type of pagecache for block devices (for example, /dev/sda). A file system typically uses the buffercache when accessing its on-disk metadata structures such as inode tables, allocation bitmaps, and so forth. Buffercache can be reclaimed similarly to pagecache.

14.1.4 Buffer Heads

Buffer heads are small auxiliary structures that tend to be allocated upon pagecache access. They can generally be reclaimed easily when the pagecache or buffercache pages are clean.

14.1.5 Writeback

As applications write to files, the pagecache becomes dirty and the buffercache may become dirty. When the amount of dirty memory reaches a specified amount in bytes (vm.dirty_background_bytes), or a specified ratio of total memory (vm.dirty_background_ratio), or when the pages have been dirty for longer than a specified amount of time (vm.dirty_expire_centisecs), the kernel begins writeback of pages, starting with the files whose pages were dirtied first. The background bytes and ratio settings are mutually exclusive: setting one overrides the other. Flusher threads perform writeback in the background and allow applications to continue running. If the I/O cannot keep up with applications dirtying pagecache, and dirty data reaches a critical setting (vm.dirty_bytes or vm.dirty_ratio), applications begin to be throttled to prevent dirty data from exceeding this threshold.

14.1.6 Readahead

The VM monitors file access patterns and may attempt to perform readahead. Readahead reads pages into the pagecache from the file system before they have been requested. It is done to allow fewer, larger I/O requests to be submitted (which is more efficient), and to pipeline the I/O (performing I/O at the same time as the application is running).

14.1.7 VFS caches

14.1.7.1 Inode Cache

This is an in-memory cache of the inode structures for each file system. These contain attributes such as the file size, permissions and ownership, and pointers to the file data.

14.1.7.2 Directory Entry Cache

This is an in-memory cache of the directory entries in the system. These contain a name (the name of a file), the inode which it refers to, and children entries. This cache is used when traversing the directory structure and accessing a file by name.

14.2 Reducing Memory Usage

14.2.1 Reducing malloc (Anonymous) Usage

Applications running on SUSE Linux Enterprise Desktop 12 SP3 can allocate more memory compared to SUSE Linux Enterprise Desktop 10. This is because glibc changed its default behavior when allocating user space memory. See http://www.gnu.org/s/libc/manual/html_node/Malloc-Tunable-Parameters.html for an explanation of these parameters.

To restore a SUSE Linux Enterprise Desktop 10-like behavior, M_MMAP_THRESHOLD should be set to 128*1024. This can be done with the mallopt() call from within the application, or by setting the MALLOC_MMAP_THRESHOLD_ environment variable (note the trailing underscore) before running the application.
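
A minimal sketch, assuming a hypothetical program my_application (131072 equals 128*1024):

root # MALLOC_MMAP_THRESHOLD_=131072 ./my_application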

14.2.2 Reducing Kernel Memory Overheads

Kernel memory that is reclaimable (caches, described above) will be trimmed automatically during memory shortages. Most other kernel memory cannot be easily reduced but is a property of the workload given to the kernel.

Reducing the requirements of the user space workload will reduce the kernel memory usage (fewer processes, fewer open files and sockets, and so on).

14.2.3 Memory Controller (Memory Cgroups)

If the memory cgroups feature is not needed, it can be switched off by passing cgroup_disable=memory on the kernel command line, reducing memory consumption of the kernel a bit. There is also a slight performance benefit as there is a small amount of accounting overhead when memory cgroups are available even if none are configured.

14.3 Virtual Memory Manager (VM) Tunable Parameters

When tuning the VM it should be understood that some changes will take time to affect the workload and take full effect. If the workload changes throughout the day, it may behave very differently at different times. A change that increases throughput under some conditions may decrease it under other conditions.

14.3.1 Reclaim Ratios

/proc/sys/vm/swappiness

This control is used to define how aggressively the kernel swaps out anonymous memory relative to pagecache and other caches. Increasing the value increases the amount of swapping. The default value is 60.

Swap I/O tends to be much less efficient than other I/O. However, some pagecache pages will be accessed much more frequently than less used anonymous memory. The right balance should be found here.

If swap activity is observed during slowdowns, it may be worth reducing this parameter. If there is a lot of I/O activity and the amount of pagecache in the system is rather small, or if there are large dormant applications running, increasing this value might improve performance.

Note that the more data is swapped out, the longer the system will take to swap data back in when it is needed.

/proc/sys/vm/vfs_cache_pressure

This variable controls the tendency of the kernel to reclaim the memory used for VFS caches, versus pagecache and swap. Increasing this value increases the rate at which VFS caches are reclaimed.

It is difficult to know when this should be changed, other than by experimentation. The slabtop command (part of the package procps) shows the top memory objects used by the kernel. The VFS caches are the "dentry" and the "*_inode_cache" objects. If these consume a large amount of memory in relation to pagecache, it may be worth trying to increase the pressure; this could also help to reduce swapping. The default value is 100.

/proc/sys/vm/min_free_kbytes

This controls the amount of memory that is kept free for use by special reserves, including atomic allocations (those which cannot wait for reclaim). This should not normally be lowered unless the system is being very carefully tuned for memory usage (normally useful for embedded rather than server applications). If page allocation failure messages and stack traces are frequently seen in logs, min_free_kbytes could be increased until the errors disappear. There is no need for concern if these messages are very infrequent. The default value depends on the amount of RAM.

/proc/sys/vm/watermark_scale_factor

Broadly speaking, free memory has high, low and min watermarks. When the low watermark is reached, kswapd wakes to reclaim memory in the background. It stays awake until free memory reaches the high watermark. Applications stall and reclaim memory directly when the min watermark is reached.

The watermark_scale_factor defines the amount of memory left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep. The unit is in fractions of 10,000. The default value of 10 means the distances between watermarks are 0.1% of the available memory in the node/system. The maximum value is 1000, or 10% of memory.

Workloads that frequently stall in direct reclaim, accounted by allocstall in /proc/vmstat, may benefit from altering this parameter. Similarly, if kswapd is sleeping prematurely, as accounted for by kswapd_low_wmark_hit_quickly, then it may indicate that the number of pages kept free to avoid stalls is too low.
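
Both counters can be inspected in /proc/vmstat (counter names can vary slightly between kernel versions):

root # grep -E 'allocstall|kswapd_low_wmark_hit_quickly' /proc/vmstat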

14.3.2 Writeback Parameters

One important change in writeback behavior since SUSE Linux Enterprise Desktop 10 is that modifications to file-backed mmap() memory are accounted immediately as dirty memory (and are subject to writeback), whereas previously they would only be subject to writeback after being unmapped, upon an msync() system call, or under heavy memory pressure.

Some applications do not expect mmap modifications to be subject to such writeback behavior, and performance can be reduced. Berkeley DB (and applications using it) is one known example that can cause problems. Increasing writeback ratios and times can improve this type of slowdown.

/proc/sys/vm/dirty_background_ratio

This is the percentage of the total amount of free and reclaimable memory. When the amount of dirty pagecache exceeds this percentage, writeback threads start writing back dirty memory. The default value is 10 (%).

/proc/sys/vm/dirty_background_bytes

This contains the amount of dirty memory at which the background kernel flusher threads will start writeback. dirty_background_bytes is the counterpart of dirty_background_ratio. If one of them is set, the other one will automatically be read as 0.

/proc/sys/vm/dirty_ratio

Similar percentage value as for dirty_background_ratio. When this is exceeded, applications that want to write to the pagecache are blocked and wait for kernel background flusher threads to reduce the amount of dirty memory. The default value is 20 (%).

/proc/sys/vm/dirty_bytes

This file controls the same tunable as dirty_ratio however the amount of dirty memory is in bytes as opposed to a percentage of reclaimable memory. Since both dirty_ratio and dirty_bytes control the same tunable, if one of them is set, the other one will automatically be read as 0. The minimum value allowed for dirty_bytes is two pages (in bytes); any value lower than this limit will be ignored and the old configuration will be retained.

/proc/sys/vm/dirty_expire_centisecs

Data which has been dirty in-memory for longer than this interval will be written out next time a flusher thread wakes up. Expiration is measured based on the modification time of a file's inode. Therefore, multiple dirtied pages from the same file will all be written when the interval is exceeded.

dirty_background_ratio and dirty_ratio together determine the pagecache writeback behavior. If these values are increased, more dirty memory is kept in the system for a longer time. With more dirty memory allowed in the system, the chance to improve throughput by avoiding writeback I/O and by submitting more optimal I/O patterns increases. However, more dirty memory can harm latency, either when memory needs to be reclaimed or at data integrity (synchronization) points when it needs to be written back to disk.

14.3.3 Timing Differences of I/O Writes between SUSE Linux Enterprise 12 and SUSE Linux Enterprise 11

The system is required to limit what percentage of the system's memory contains file-backed data that needs writing to disk. This guarantees that the system can always allocate the necessary data structures to complete I/O. The maximum amount of memory that can be dirty and requires writing at any time is controlled by vm.dirty_ratio (/proc/sys/vm/dirty_ratio). The defaults are:

SLE-11-SP3:     vm.dirty_ratio = 40
SLE-12:         vm.dirty_ratio = 20

The primary advantage of using the lower ratio in SUSE Linux Enterprise 12 is that page reclamation and allocation in low memory situations completes faster, as there is a higher probability that old clean pages will be quickly found and discarded. The secondary advantage is that if all data on the system must be synchronized, the time to complete the operation on SUSE Linux Enterprise 12 will by default be lower than on SUSE Linux Enterprise 11 SP3. Most workloads will not notice this change, as data is either synchronized with fsync() by the application or not dirtied quickly enough to hit the limits.

There are exceptions: if your application is affected by this, it will manifest as an unexpected stall during writes. To verify that the application is affected by dirty data rate limiting, monitor /proc/PID_OF_APPLICATION/stack and check whether the application spends significant time in balance_dirty_pages_ratelimited. If this is observed and it is a problem, increase the value of vm.dirty_ratio to 40 to restore the SUSE Linux Enterprise 11 SP3 behavior.
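
A minimal sketch of such monitoring, assuming a hypothetical application my_application:

root # watch -n 1 "cat /proc/$(pidof my_application)/stack"

If balance_dirty_pages_ratelimited appears regularly in the output, the application is being throttled.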

It is important to note that the overall I/O throughput is the same regardless of the setting. The only difference is the timing of when the I/O is queued.

The following example uses dd to asynchronously write 30% of memory to disk, which happens to be affected by the change in vm.dirty_ratio:

root # MEMTOTAL_MBYTES=`free -m | grep Mem: | awk '{print $2}'`
root # sysctl vm.dirty_ratio=40
root # dd if=/dev/zero of=zerofile ibs=1048576 count=$((MEMTOTAL_MBYTES*30/100))
2507145216 bytes (2.5 GB) copied, 8.00153 s, 313 MB/s
root # sysctl vm.dirty_ratio=20
root # dd if=/dev/zero of=zerofile ibs=1048576 count=$((MEMTOTAL_MBYTES*30/100))
2507145216 bytes (2.5 GB) copied, 10.1593 s, 247 MB/s

Note that the parameter affects the time it takes for the command to complete and the apparent write speed of the device. With dirty_ratio=40, more of the data is cached and written to disk in the background by the kernel. It is very important to note that the speed of I/O is identical in both cases. To demonstrate, this is the result when dd synchronizes the data before exiting:

root # sysctl vm.dirty_ratio=40
root # dd if=/dev/zero of=zerofile ibs=1048576 count=$((MEMTOTAL_MBYTES*30/100)) conv=fdatasync
2507145216 bytes (2.5 GB) copied, 21.0663 s, 119 MB/s
root # sysctl vm.dirty_ratio=20
root # dd if=/dev/zero of=zerofile ibs=1048576 count=$((MEMTOTAL_MBYTES*30/100)) conv=fdatasync
2507145216 bytes (2.5 GB) copied, 21.7286 s, 115 MB/s

Note that dirty_ratio had almost no impact here; the difference is within the natural variability of the measurement. Hence, dirty_ratio does not directly impact I/O performance, but it may affect the apparent performance of a workload that writes data asynchronously without synchronizing.

14.3.4 Readahead Parameters

/sys/block/<bdev>/queue/read_ahead_kb

If one or more processes are sequentially reading a file, the kernel reads some data in advance (ahead) to reduce the amount of time that processes need to wait for data to be available. The actual amount of data being read in advance is computed dynamically, based on how sequential the I/O seems to be. This parameter sets the maximum amount of data that the kernel reads ahead for a single file. If you observe that large sequential reads from a file are not fast enough, you can try increasing this value. Increasing it too far may result in readahead thrashing, where pagecache used for readahead is reclaimed before it can be used, or in slowdowns because of a large amount of useless I/O. The default value is 512 (KB).
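
For example, to double the readahead maximum for a hypothetical device sda:

root # cat /sys/block/sda/queue/read_ahead_kb
512
root # echo 1024 > /sys/block/sda/queue/read_ahead_kb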

14.3.5 Transparent Huge Page Parameters

Transparent Huge Pages (THP) provide a way to dynamically allocate huge pages, either on demand by the process or deferred until later via the khugepaged kernel thread. This method is distinct from hugetlbfs, where the allocation and use of huge pages is managed manually. Workloads with contiguous memory access patterns can benefit greatly from THP: a 1000-fold decrease in page faults can be observed when running synthetic workloads with contiguous memory access patterns.

There are cases when THP may be undesirable. Workloads with sparse memory access patterns can perform poorly with THP due to excessive memory usage. For example, 2 MB of memory may be used at fault time instead of 4 KB for each fault, which can ultimately lead to premature page reclaim. On releases older than SUSE Linux Enterprise 12 SP2, it was possible for an application to stall for long periods of time trying to allocate a THP, which frequently led to a recommendation of disabling THP. Such recommendations should be re-evaluated for SUSE Linux Enterprise 12 SP3.

The behavior of THP may be configured via the transparent_hugepage= kernel parameter or via sysfs. For example, it may be disabled by adding the kernel parameter transparent_hugepage=never, rebuilding your grub2 configuration, and rebooting. Verify if THP is disabled with:

root # cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

If disabled, the value never is shown in square brackets as in the example above. A value of always will always try to use THP at fault time, but defer to khugepaged if the allocation fails. A value of madvise will only allocate THP for address spaces explicitly specified by an application.
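
The setting can also be changed at runtime via sysfs; such a change does not persist across reboots:

root # echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
root # cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never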

/sys/kernel/mm/transparent_hugepage/defrag

This parameter controls how much effort an application commits when allocating a THP. A value of always is the default for SUSE Linux Enterprise 12 SP1 and earlier releases that supported THP. If a THP is not available, the application will try to defragment memory. It potentially incurs large stalls in an application if the memory is fragmented and a THP is not available.

A value of madvise means that THP allocation requests will only defragment if the application explicitly requests it. This is the default for SUSE Linux Enterprise 12 SP2 and later releases.

defer is only available on SUSE Linux Enterprise 12 SP2 and later releases. If a THP is not available, the application falls back to using small pages. The kswapd and kcompactd kernel threads are woken to defragment memory in the background, and a THP will be allocated later by khugepaged.

The final option never will use small pages if a THP is unavailable but no other action will take place.

14.3.6 khugepaged Parameters

khugepaged will be automatically started when transparent_hugepage is set to always or madvise, and it will be automatically shut down if it is set to never. Normally this runs at low frequency but the behavior can be tuned.

/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

A value of 0 disables khugepaged, even though THP may still be used at fault time. This may be important for latency-sensitive applications that benefit from THP but cannot tolerate a stall when khugepaged processes their memory.

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

This parameter controls how many pages are scanned by khugepaged in a single pass. A scan identifies small pages that can be reallocated as THP. Increasing this value will allocate THP in the background faster at the cost of CPU usage.

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

khugepaged sleeps for a short interval, specified by this parameter, after each pass to limit how much CPU it uses. Reducing this value will allocate THP in the background faster at the cost of CPU usage. A value of 0 forces continual scanning.

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

This parameter controls how long khugepaged sleeps if it fails to allocate a THP in the background, while waiting for kswapd and kcompactd to take action.

The remaining parameters for khugepaged are rarely useful for performance tuning, but are fully documented in /usr/src/linux/Documentation/vm/transhuge.txt.

14.3.7 Further VM Parameters

For the complete list of the VM tunable parameters, see /usr/src/linux/Documentation/sysctl/vm.txt (available after having installed the kernel-source package).

14.4 Monitoring VM Behavior

Some simple tools that can help monitor VM behavior:

  1. vmstat: This tool gives a good overview of what the VM is doing. See Section 2.1.1, “vmstat” for details.

  2. /proc/meminfo: This file gives a detailed breakdown of where memory is being used. See Section 2.4.2, “Detailed Memory Usage: /proc/meminfo” for details.

  3. slabtop: This tool provides detailed information about kernel slab memory usage. buffer_head, dentry, inode_cache, ext3_inode_cache, etc. are the major caches. This command is available with the package procps.

  4. /proc/vmstat: This file gives a detailed breakdown of internal VM behavior. The information contained within is implementation specific and may not always be available. Some information is duplicated in /proc/meminfo, and other information can be presented in a friendly fashion by utilities. For maximum utility, this file needs to be monitored over time to observe rates of change (see the sketch after this list). The most important pieces of information that are hard to derive from other sources are as follows:

    pgscan_kswapd_*, pgsteal_kswapd_*

    These report respectively the number of pages scanned and reclaimed by kswapd since the system started. The ratio between these values can be interpreted as the reclaim efficiency with a low efficiency implying that the system is struggling to reclaim memory and may be thrashing. Light activity here is generally not something to be concerned with.

    pgscan_direct_*, pgsteal_direct_*

    These report respectively the number of pages scanned and reclaimed by an application directly; this is correlated with increases in the allocstall counter. Direct reclaim is more serious than kswapd activity, as these events indicate that processes are stalling. Heavy activity here, combined with kswapd activity and high rates of pgpgin and pgpgout, and/or high rates of pswpin and pswpout, is a sign that a system is thrashing heavily.

    More detailed information can be obtained using tracepoints.

    thp_fault_alloc, thp_fault_fallback

    These counters correspond to how many THPs were allocated directly by an application and how many times a THP was not available and small pages were used. Generally a high fallback rate is harmless unless the application is very sensitive to TLB pressure.

    thp_collapse_alloc, thp_collapse_alloc_failed

    These counters correspond to how many THPs were allocated by khugepaged and how many times a THP was not available and small pages were used. A high fallback rate implies that the system is fragmented and THPs are not being used even when the memory usage by applications would allow them. It is only a problem for applications that are sensitive to TLB pressure.

    compact_*_scanned, compact_stall, compact_fail, compact_success

    These counters may increase when THP is enabled and the system is fragmented. compact_stall is incremented when an application stalls allocating a THP. The remaining counters account for pages scanned and for the number of defragmentation events that succeeded or failed.
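
    A minimal sketch for monitoring the rates of change of the counters described above (watch highlights values that changed between samples):

    root # watch -d -n 5 "grep -E '^(pgscan|pgsteal|allocstall|thp_|compact_)' /proc/vmstat"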

15 Tuning the Network

The network subsystem is complex and its tuning highly depends on the system use scenario and on external factors such as software clients or hardware components (switches, routers, or gateways) in your network. The Linux kernel aims more at reliability and low latency than low overhead and high throughput. Other settings can mean less security, but better performance.

15.1 Configurable Kernel Socket Buffers

Networking is largely based on the TCP/IP protocol and a socket interface for communication; for more information about TCP/IP, see Chapter 17, Basic Networking. The Linux kernel handles data it receives or sends via the socket interface in socket buffers. These kernel socket buffers are tunable.

Important
Important: TCP Autotuning

Since kernel version 2.6.17, full autotuning with a 4 MB maximum buffer size exists. This means that manual tuning usually will not improve networking performance considerably. It is often best not to touch the following variables, or at least to check the outcome of tuning efforts carefully.

If you update from an older kernel, it is recommended to remove manual TCP tunings in favor of the autotuning feature.

The special files in the /proc file system can modify the size and behavior of kernel socket buffers; for general information about the /proc file system, see Section 2.6, “The /proc File System”. Find networking related files in:

/proc/sys/net/core
/proc/sys/net/ipv4
/proc/sys/net/ipv6

General net variables are explained in the kernel documentation (linux/Documentation/sysctl/net.txt). Special ipv4 variables are explained in linux/Documentation/networking/ip-sysctl.txt and linux/Documentation/networking/ipvs-sysctl.txt.

In the /proc file system, for example, it is possible to either set the Maximum Socket Receive Buffer and Maximum Socket Send Buffer for all protocols (in core), or to set both these options for the TCP protocol only (in ipv4), thus overriding the setting for all protocols.

/proc/sys/net/ipv4/tcp_moderate_rcvbuf

If /proc/sys/net/ipv4/tcp_moderate_rcvbuf is set to 1, autotuning is active and buffer size is adjusted dynamically.

/proc/sys/net/ipv4/tcp_rmem

The three values setting the minimum, initial, and maximum size of the Memory Receive Buffer per connection. They define the actual memory usage, not only TCP window size.

/proc/sys/net/ipv4/tcp_wmem

The same as tcp_rmem, but for Memory Send Buffer per connection.

/proc/sys/net/core/rmem_max

Set to limit the maximum receive buffer size that applications can request.

/proc/sys/net/core/wmem_max

Set to limit the maximum send buffer size that applications can request.
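
To inspect the current values of these variables (the values differ between systems and kernel versions):

root # sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max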

Via /proc it is possible to disable TCP features that you do not need (all TCP features are switched on by default). For example, check the following files:

/proc/sys/net/ipv4/tcp_timestamps

TCP time stamps are defined in RFC1323.

/proc/sys/net/ipv4/tcp_window_scaling

TCP window scaling is also defined in RFC1323.

/proc/sys/net/ipv4/tcp_sack

Selective acknowledgments (SACK).

Use sysctl to read or write variables of the /proc file system. sysctl is preferable to cat (for reading) and echo (for writing), because it also reads settings from /etc/sysctl.conf and, thus, those settings survive reboots reliably. With sysctl you can read all variables and their values easily; as root use the following command to list TCP related settings:

sysctl -a | grep tcp

Note
Note: Side-Effects of Tuning Network Variables

Tuning network variables can affect other system resources such as CPU or memory use.

15.2 Detecting Network Bottlenecks and Analyzing Network Traffic

Before starting with network tuning, it is important to isolate network bottlenecks and network traffic patterns. There are some tools that can help you with detecting those bottlenecks.

The following tools can help with analyzing your network traffic: netstat, tcpdump, and Wireshark, a network traffic analyzer.

15.3 Netfilter

The Linux firewall and masquerading features are provided by the Netfilter kernel modules. This is a highly configurable rule based framework. If a rule matches a packet, Netfilter accepts or denies it or takes special action (target) as defined by rules such as address translation.

There are quite a lot of properties Netfilter can take into account. Thus, the more rules that are defined, the longer packet processing may take. Also, advanced connection tracking can be rather expensive and thus slow down overall networking.

When the kernel queue becomes full, all new packets are dropped, causing existing connections to fail. The fail-open feature allows a user to temporarily disable packet inspection and maintain connectivity under heavy network traffic. For reference, see https://home.regit.org/netfilter-en/using-nfqueue-and-libnetfilter_queue/.

For more information, see the home page of the Netfilter and iptables project, http://www.netfilter.org

15.4 Improving the Network Performance with Receive Packet Steering (RPS)

Modern network interface devices can move so many packets that the host can become the limiting factor for achieving maximum performance. To keep up, the system must be able to distribute the work across multiple CPU cores.

Some modern network interfaces can help distribute the work to multiple CPU cores through the implementation of multiple transmission and multiple receive queues in hardware. However, others are only equipped with a single queue and the driver must deal with all incoming packets in a single, serialized stream. To work around this issue, the operating system must "parallelize" the stream to distribute the work across multiple CPUs. On SUSE Linux Enterprise Desktop this is done via Receive Packet Steering (RPS). RPS can also be used in virtual environments.

RPS creates a unique hash for each data stream using IP addresses and port numbers. The use of this hash ensures that packets for the same data stream are sent to the same CPU, which helps to increase performance.

RPS is configured per network device receive queue and interface. The configuration file names match the following scheme:

/sys/class/net/<device>/queues/<rx-queue>/rps_cpus

<device> stands for the network device, such as eth0, eth1. <rx-queue> stands for the receive queue, such as rx-0, rx-1.

If the network interface hardware only supports a single receive queue, only rx-0 will exist. If it supports multiple receive queues, there will be an rx-N directory for each receive queue.

These configuration files contain a comma-delimited list of CPU bitmaps. By default, all bits are set to 0. With this setting RPS is disabled and therefore the CPU that handles the interrupt will also process the packet queue.

To enable RPS and allow specific CPUs to process packets for the receive queue of the interface, set the value of their positions in the bitmap to 1. For example, to enable CPUs 0-3 to process packets for the first receive queue of eth0, set the bit positions 0-3 to 1 in binary: 00001111. This representation then needs to be converted to hex, which results in F in this case. Set this hex value with the following command:

echo "f" > /sys/class/net/eth0/queues/rx-0/rps_cpus

If you wanted to enable CPUs 8-15:

1111 1111 0000 0000 (binary)
15     15    0    0 (decimal)
F       F    0    0 (hex)

The command to set the hex value of ff00 would be:

echo "ff00" > /sys/class/net/eth0/queues/rx-0/rps_cpus

On NUMA machines, best performance can be achieved by configuring RPS to use the CPUs on the same NUMA node as the interrupt for the interface's receive queue.

On non-NUMA machines, all CPUs can be used. If the interrupt rate is very high, excluding the CPU handling the network interface can boost performance. The CPU being used for the network interface can be determined from /proc/interrupts. For example:

root # cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
...
  51:  113915241          0          0          0      Phys-fasteoi   eth0
...

In this case, CPU 0 is the only CPU processing interrupts for eth0, since only the CPU0 column contains a non-zero value.
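
Following this advice, to let CPUs 1-3 process packets while excluding CPU 0 (which handles the eth0 interrupts in the example above), set bits 1-3 in the bitmap, which is 1110 in binary and e in hex:

root # echo "e" > /sys/class/net/eth0/queues/rx-0/rps_cpus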

On x86 and AMD64/Intel 64 platforms, irqbalance can be used to distribute hardware interrupts across CPUs. See man 1 irqbalance for more details.

15.5 For More Information

Part VI Handling System Dumps

16 Tracing Tools

SUSE Linux Enterprise Desktop comes with several tools that help you obtain useful information about your system. You can use the information for various purposes, for example, to debug and find problems in your program, to discover places causing performance drops, or to trace a running process to …

17 Kexec and Kdump

Kexec is a tool to boot to another kernel from the currently running one. You can perform faster system reboots without any hardware initialization. You can also prepare the system to boot to another kernel if the system crashes.

16 Tracing Tools


SUSE Linux Enterprise Desktop comes with several tools that help you obtain useful information about your system. You can use the information for various purposes, for example, to debug and find problems in your program, to discover places causing performance drops, or to trace a running process to find out what system resources it uses. Most of the tools are part of the installation media. In some cases, they need to be installed from the SUSE Software Development Kit, which is a separate download.

Note
Note: Tracing and Impact on Performance

While a running process is being monitored for system or library calls, its performance is heavily reduced. Therefore, use tracing tools only for as long as you need to collect the data.

16.1 Tracing System Calls with strace

The strace command traces system calls of a process and signals received by the process. strace can either run a new command and trace its system calls, or you can attach strace to an already running command. Each line of the command's output contains the system call name, followed by its arguments in parentheses and its return value.

To run a new command and start tracing its system calls, enter the command to be monitored as you normally do, and add strace at the beginning of the command line:

tux > strace ls
execve("/bin/ls", ["ls"], [/* 52 vars */]) = 0
brk(0)                                  = 0x618000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
        = 0x7f9848667000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
        = 0x7f9848666000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT \
(No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=200411, ...}) = 0
mmap(NULL, 200411, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9848635000
close(3)                                = 0
open("/lib64/librt.so.1", O_RDONLY)     = 3
[...]
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7fd780f79000
write(1, "Desktop\nDocuments\nbin\ninst-sys\n", 31Desktop
Documents
bin
inst-sys
) = 31
close(1)                                = 0
munmap(0x7fd780f79000, 4096)            = 0
close(2)                                = 0
exit_group(0)                           = ?

To attach strace to an already running process, you need to specify the -p option with the process ID (PID) of the process that you want to monitor:

tux > strace -p `pidof cron`
 Process 1261 attached
 restart_syscall(<... resuming interrupted call ...>) = 0
  stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2309, ...}) = 0
  select(5, [4], NULL, NULL, {0, 0})      = 0 (Timeout)
  socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5
  connect(5, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 110) = 0
  sendto(5, "\2\0\0\0\0\0\0\0\5\0\0\0root\0", 17, MSG_NOSIGNAL, NULL, 0) = 17
  poll([{fd=5, events=POLLIN|POLLERR|POLLHUP}], 1, 5000) = 1 ([{fd=5, revents=POLLIN|POLLHUP}])
  read(5, "\2\0\0\0\1\0\0\0\5\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\6\0\0\0"..., 36) = 36
  read(5, "root\0x\0root\0/root\0/bin/bash\0", 28) = 28
  close(5)                                = 0
  rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
  rt_sigaction(SIGCHLD, NULL, {0x7f772b9ea890, [], SA_RESTORER|SA_RESTART, 0x7f772adf7880}, 8) = 0
  rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
  nanosleep({60, 0}, 0x7fff87d8c580)      = 0
  stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2309, ...}) = 0
  select(5, [4], NULL, NULL, {0, 0})      = 0 (Timeout)
  socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5
  connect(5, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 110) = 0
  sendto(5, "\2\0\0\0\0\0\0\0\5\0\0\0root\0", 17, MSG_NOSIGNAL, NULL, 0) = 17
  poll([{fd=5, events=POLLIN|POLLERR|POLLHUP}], 1, 5000) = 1 ([{fd=5, revents=POLLIN|POLLHUP}])
  read(5, "\2\0\0\0\1\0\0\0\5\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\6\0\0\0"..., 36) = 36
  read(5, "root\0x\0root\0/root\0/bin/bash\0", 28) = 28
  close(5)
  [...]

The -e option understands several sub-options and arguments. For example, to trace all attempts to open or write to a particular file, use the following:

tux > strace -e trace=open,write ls ~
open("/etc/ld.so.cache", O_RDONLY)       = 3
open("/lib64/librt.so.1", O_RDONLY)      = 3
open("/lib64/libselinux.so.1", O_RDONLY) = 3
open("/lib64/libacl.so.1", O_RDONLY)     = 3
open("/lib64/libc.so.6", O_RDONLY)       = 3
open("/lib64/libpthread.so.0", O_RDONLY) = 3
[...]
open("/usr/lib/locale/cs_CZ.utf8/LC_CTYPE", O_RDONLY) = 3
open(".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
write(1, "addressbook.db.bak\nbin\ncxoffice\n"..., 311) = 311

To trace only network-related system calls, use -e trace=network:

tux > strace -e trace=network -p 26520
Process 26520 attached - interrupt to quit
socket(PF_NETLINK, SOCK_RAW, 0)         = 50
bind(50, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0
getsockname(50, {sa_family=AF_NETLINK, pid=26520, groups=00000000}, \
[12]) = 0
sendto(50, "\24\0\0\0\26\0\1\3~p\315K\0\0\0\0\0\0\0\0", 20, 0,
{sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20
[...]

The -c option calculates the time the kernel spent on each system call:

tux > strace -c find /etc -name xorg.conf
/etc/X11/xorg.conf
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 32.38    0.000181         181         1           execve
 22.00    0.000123           0       576           getdents64
 19.50    0.000109           0       917        31 open
 19.14    0.000107           0       888           close
  4.11    0.000023           2        10           mprotect
  0.00    0.000000           0         1           write
[...]
  0.00    0.000000           0         1           getrlimit
  0.00    0.000000           0         1           arch_prctl
  0.00    0.000000           0         3         1 futex
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         4           fadvise64
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    0.000559                  3633        33 total

To trace all child processes of a process, use -f:

tux > strace -f rcapache2 status
execve("/usr/sbin/rcapache2", ["rcapache2", "status"], [/* 81 vars */]) = 0
brk(0)                                  = 0x69e000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7f3bb553b000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7f3bb553a000
[...]
[pid  4823] rt_sigprocmask(SIG_SETMASK, [],  <unfinished ...>
[pid  4822] close(4 <unfinished ...>
[pid  4823] <... rt_sigprocmask resumed> NULL, 8) = 0
[pid  4822] <... close resumed> )       = 0
[...]
[pid  4825] mprotect(0x7fc42cbbd000, 16384, PROT_READ) = 0
[pid  4825] mprotect(0x60a000, 4096, PROT_READ) = 0
[pid  4825] mprotect(0x7fc42cde4000, 4096, PROT_READ) = 0
[pid  4825] munmap(0x7fc42cda2000, 261953) = 0
[...]
[pid  4830] munmap(0x7fb1fff10000, 261953) = 0
[pid  4830] rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
[pid  4830] open("/dev/tty", O_RDWR|O_NONBLOCK) = 3
[pid  4830] close(3)
[...]
read(255, "\n\n# Inform the caller not only v"..., 8192) = 73
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
exit_group(0)

If you need to analyze the output of strace and the output messages are too long to be inspected directly in the console window, use the -o option to write the output to a file. Unnecessary messages, such as information about attaching and detaching processes, are then suppressed. You can also suppress these messages (normally printed on the standard output) with -q. To add a time stamp at the beginning of each line with a system call, use -t:

tux > strace -t -o strace_sleep.txt sleep 1; more strace_sleep.txt
08:44:06 execve("/bin/sleep", ["sleep", "1"], [/* 81 vars */]) = 0
08:44:06 brk(0)                         = 0x606000
08:44:06 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \
-1, 0) = 0x7f8e78cc5000
[...]
08:44:06 close(3)                       = 0
08:44:06 nanosleep({1, 0}, NULL)        = 0
08:44:07 close(1)                       = 0
08:44:07 close(2)                       = 0
08:44:07 exit_group(0)                  = ?

The behavior and output format of strace can be largely controlled. For more information, see the relevant manual page (man 1 strace).

16.2 Tracing Library Calls with ltrace

ltrace traces dynamic library calls of a process. It is used in a similar way to strace, and most of its parameters have a very similar or identical meaning. By default, ltrace uses the /etc/ltrace.conf or ~/.ltrace.conf configuration files. You can, however, specify an alternative one with the -F CONFIG_FILE option.
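
For example, to run ltrace with an alternative configuration file (the file name here is hypothetical):

tux > ltrace -F ~/my-ltrace.conf ls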

With the -S option, ltrace can trace system calls in addition to library calls:

tux > ltrace -S -o ltrace_find.txt find /etc -name \
xorg.conf; more ltrace_find.txt
SYS_brk(NULL)                                              = 0x00628000
SYS_mmap(0, 4096, 3, 34, 0xffffffff)                       = 0x7f1327ea1000
SYS_mmap(0, 4096, 3, 34, 0xffffffff)                       = 0x7f1327ea0000
[...]
fnmatch("xorg.conf", "xorg.conf", 0)                       = 0
free(0x0062db80)                                           = <void>
__errno_location()                                         = 0x7f1327e5d698
__ctype_get_mb_cur_max(0x7fff25227af0, 8192, 0x62e020, -1, 0) = 6
__ctype_get_mb_cur_max(0x7fff25227af0, 18, 0x7f1327e5d6f0, 0x7fff25227af0,
0x62e031) = 6
__fprintf_chk(0x7f1327821780, 1, 0x420cf7, 0x7fff25227af0, 0x62e031
<unfinished ...>
SYS_fstat(1, 0x7fff25227230)                               = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff)                       = 0x7f1327e72000
SYS_write(1, "/etc/X11/xorg.conf\n", 19)                   = 19
[...]

You can change the type of traced events with the -e option. The following example prints library calls related to fnmatch and strlen functions:

tux > ltrace -e fnmatch,strlen find /etc -name xorg.conf
[...]
fnmatch("xorg.conf", "xorg.conf", 0)             = 0
strlen("Xresources")                             = 10
strlen("Xresources")                             = 10
strlen("Xresources")                             = 10
fnmatch("xorg.conf", "Xresources", 0)            = 1
strlen("xorg.conf.install")                      = 17
[...]

To display only the symbols included in a specific library, use -l /path/to/library:

tux > ltrace -l /lib64/librt.so.1 sleep 1
clock_gettime(1, 0x7fff4b5c34d0, 0, 0, 0)                  = 0
clock_gettime(1, 0x7fff4b5c34c0, 0xffffffffff600180, -1, 0) = 0
+++ exited (status 0) +++

You can make the output more readable by indenting each nested call by a specified number of spaces with the -n NUM_OF_SPACES option.
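
For example, to indent each nesting level by two spaces:

tux > ltrace -n 2 find /etc -name xorg.conf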

16.3 Debugging and Profiling with Valgrind

Valgrind is a set of tools to debug and profile your programs so that they can run faster and with fewer errors. Valgrind can detect problems related to memory management and threading, and can also serve as a framework for building new debugging tools. Be aware that Valgrind can incur high overhead, causing, for example, longer runtimes or, under concurrent workloads that depend on timing, changed program behavior.

16.3.1 Installation

Valgrind is not shipped with the standard SUSE Linux Enterprise Desktop distribution. To install it on your system, obtain the SUSE Software Development Kit, and either install it and run

zypper install valgrind

or browse through the SUSE Software Development Kit directory tree, locate the Valgrind package and install it with

rpm -i valgrind-VERSION.ARCHITECTURE.rpm

The SDK is a module for SUSE Linux Enterprise and is available via an online channel from the SUSE Customer Center. Alternatively download it from http://download.suse.com/. (Search for SUSE Linux Enterprise Software Development Kit). Refer to Chapter 11, Installing Modules, Extensions, and Third Party Add-On Products for details.

16.3.2 Supported Architectures

SUSE Linux Enterprise Desktop supports Valgrind on the following architectures:

  • AMD64/Intel 64

  • POWER

  • z Systems

16.3.3 General Information

The main advantage of Valgrind is that it works with existing compiled executables. You do not need to recompile or modify your programs to use it. Run Valgrind like this:

valgrind VALGRIND_OPTIONS YOUR-PROGRAM YOUR-PROGRAM-OPTIONS

Valgrind consists of several tools, each providing specific functionality. Information in this section is general and valid regardless of the used tool. The most important configuration option is --tool. This option tells Valgrind which tool to run. If you omit this option, memcheck is selected by default. For example, to run find ~ -name .bashrc with Valgrind's memcheck tool, enter the following on the command line:

valgrind --tool=memcheck find ~ -name .bashrc

A list of standard Valgrind tools with a brief description follows:

memcheck

Detects memory errors. It helps you tune your programs to behave correctly.

cachegrind

Profiles cache prediction. It helps you tune your programs to run faster.

callgrind

Works in a similar way to cachegrind but also gathers additional cache-profiling information.

exp-drd

Detects thread errors. It helps you tune your multi-threaded programs to behave correctly.

helgrind

Another thread error detector. Similar to exp-drd but uses different techniques for problem analysis.

massif

A heap profiler. Heap is an area of memory used for dynamic memory allocation. This tool helps you tune your program to use less memory.

lackey

An example tool showing instrumentation basics.

16.3.4 Default Options

Valgrind can read options at start-up. There are three places which Valgrind checks:

  1. The file .valgrindrc in the home directory of the user who runs Valgrind.

  2. The environment variable $VALGRIND_OPTS

  3. The file .valgrindrc in the current directory where Valgrind is run from.

These resources are parsed exactly in this order, and options given later take precedence over options processed earlier. Options specific to a particular Valgrind tool must be prefixed with the tool name and a colon. For example, if you want cachegrind to always write profile data to /tmp/cachegrind_PID.log, add the following line to the .valgrindrc file in your home directory:

--cachegrind:cachegrind-out-file=/tmp/cachegrind_%p.log
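
The same syntax works in the $VALGRIND_OPTS environment variable. The following sketch combines a generic option with a tool-prefixed one (the values are only an illustration):

tux > export VALGRIND_OPTS="--error-limit=no --memcheck:leak-check=yes"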

16.3.5 How Valgrind Works

Valgrind takes control of your executable before it starts. It reads debugging information from the executable and related shared libraries. The executable's code is redirected to the selected Valgrind tool, and the tool adds its own code to handle its debugging. Then the code is handed back to the Valgrind core and the execution continues.

For example, memcheck adds its code, which checks every memory access. As a consequence, the program runs much slower than in the native execution environment.

Valgrind simulates every instruction of your program. Therefore, it not only checks the code of your program, but also all related libraries (including the C library), libraries used for the graphical environment, and so on. If you try to detect errors with Valgrind, it also detects errors in associated libraries (like the C, X11, or Gtk libraries). Because you probably do not need these errors, Valgrind can selectively suppress these error messages using suppression files. The --gen-suppressions=yes option tells Valgrind to report these suppressions, which you can copy to a file.
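
A possible workflow (the program and file names are hypothetical): run with --gen-suppressions=all to print a suppression record for each reported error without prompting, copy the records from the log to a suppression file, and pass that file back on subsequent runs:

tux > valgrind --gen-suppressions=all --log-file=memcheck.log ./my-prog
tux > valgrind --suppressions=my-prog.supp ./my-prog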

Supply a real executable (machine code) as a Valgrind argument. If your application is run, for example, from a shell or Perl script, you will mistakenly get error reports related to /bin/sh (or /usr/bin/perl). In such cases, you can use --trace-children=yes to work around this issue. However, using the executable itself avoids any confusion.
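
For example, to profile a program that is started via a wrapper script (the script name is hypothetical):

tux > valgrind --trace-children=yes ./my-wrapper.sh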

16.3.6 Messages

During its runtime, Valgrind reports messages with detailed errors and important events. The following example explains the messages:

tux > valgrind --tool=memcheck find ~ -name .bashrc
[...]
==6558== Conditional jump or move depends on uninitialised value(s)
==6558==    at 0x400AE79: _dl_relocate_object (in /lib64/ld-2.11.1.so)
==6558==    by 0x4003868: dl_main (in /lib64/ld-2.11.1.so)
[...]
==6558== Conditional jump or move depends on uninitialised value(s)
==6558==    at 0x400AE82: _dl_relocate_object (in /lib64/ld-2.11.1.so)
==6558==    by 0x4003868: dl_main (in /lib64/ld-2.11.1.so)
[...]
==6558== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
==6558== malloc/free: in use at exit: 2,228 bytes in 8 blocks.
==6558== malloc/free: 235 allocs, 227 frees, 489,675 bytes allocated.
==6558== For counts of detected errors, rerun with: -v
==6558== searching for pointers to 8 not-freed blocks.
==6558== checked 122,584 bytes.
==6558==
==6558== LEAK SUMMARY:
==6558==    definitely lost: 0 bytes in 0 blocks.
==6558==      possibly lost: 0 bytes in 0 blocks.
==6558==    still reachable: 2,228 bytes in 8 blocks.
==6558==         suppressed: 0 bytes in 0 blocks.
==6558== Rerun with --leak-check=full to see details of leaked memory.

The ==6558== prefix introduces Valgrind's messages; the number is the process ID (PID). This makes it easy to distinguish Valgrind's messages from the output of the program itself, and to decide which messages belong to a particular process.

To make Valgrind's messages more detailed, use -v or even -v -v.

You can make Valgrind send its messages to three different places:

  1. By default, Valgrind sends its messages to the file descriptor 2, which is the standard error output. You can tell Valgrind to send its messages to any other file descriptor with the --log-fd=FILE_DESCRIPTOR_NUMBER option.

  2. The second and probably more useful way is to send Valgrind's messages to a file with --log-file=FILENAME. This option accepts several variables, for example, %p gets replaced with the PID of the currently profiled process. This way you can send messages to different files based on their PID. %q{env_var} is replaced with the value of the related env_var environment variable.

    The following example checks for possible memory errors during the Apache Web server restart, while following children processes and writing detailed Valgrind's messages to separate files distinguished by the current process PID:

    tux > valgrind -v --tool=memcheck --trace-children=yes \
    --log-file=valgrind_pid_%p.log rcapache2 restart

    On the testing system, this created 52 log files and took 75 seconds instead of the usual 7 seconds needed to run sudo systemctl restart apache2 without Valgrind, that is, approximately 10 times longer.

    tux > ls -1 valgrind_pid_*log
    valgrind_pid_11780.log
    valgrind_pid_11782.log
    valgrind_pid_11783.log
    [...]
    valgrind_pid_11860.log
    valgrind_pid_11862.log
    valgrind_pid_11863.log
  3. You can also send Valgrind's messages over the network. Specify the IP address and port number of the network socket with the --log-socket=AA.BB.CC.DD:PORT_NUM option. If you omit the port number, 1500 will be used.

    Sending Valgrind's messages to a network socket is useless if no application on the remote machine can receive them. That is why valgrind-listener, a simple listener, is shipped together with Valgrind. It accepts connections on the specified port and copies everything it receives to the standard output.
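
    A minimal sketch of this setup (the IP address and port are examples): start the listener on the remote machine, then point Valgrind at it from the local machine:

    tux > valgrind-listener 12345                        # on the remote machine
    tux > valgrind --log-socket=192.168.0.1:12345 ls     # on the local machine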

16.3.7 Error Messages

Valgrind remembers all error messages, and if it detects a new error, the error is compared against old error messages. This way Valgrind checks for duplicate error messages. In case of a duplicate error, it is recorded but no message is shown. This mechanism prevents you from being overwhelmed by millions of duplicate errors.

The -v option adds a summary of all reports (sorted by their total count) to the end of Valgrind's output. Moreover, Valgrind stops collecting errors if it detects either 1000 different errors, or 10 000 000 errors in total. To lift this limit and see all error messages, use --error-limit=no.

Some errors often cause others. Therefore, fix errors in the order they appear and re-check the program repeatedly.

16.4 For More Information

  • For a complete list of options related to the described tracing tools, see the corresponding man page (man 1 strace, man 1 ltrace, and man 1 valgrind).

  • Describing advanced usage of Valgrind is beyond the scope of this document. However, Valgrind is very well documented; see the Valgrind User Manual. These pages are indispensable if you need more advanced information on Valgrind or on the usage and purpose of its standard tools.

17 Kexec and Kdump


Kexec is a tool to boot to another kernel from the currently running one. You can perform faster system reboots without any hardware initialization. You can also prepare the system to boot to another kernel if the system crashes.

17.1 Introduction

With Kexec, you can replace the running kernel with another one without a hard reboot. The tool is useful for several reasons:

  • Faster system rebooting

    If you need to reboot the system frequently, Kexec can save you significant time.

  • Avoiding unreliable firmware and hardware

    Computer hardware is complex and serious problems may occur during the system start-up. You cannot always replace unreliable hardware immediately. Kexec boots the kernel to a controlled environment with the hardware already initialized. The risk of unsuccessful system start is then minimized.

  • Saving the dump of a crashed kernel

    Kexec preserves the contents of the physical memory. After the production kernel fails, the capture kernel (an additional kernel running in a reserved memory range) saves the state of the failed kernel. The saved image can help you with the subsequent analysis.

  • Booting without GRUB 2 configuration

    When the system boots a kernel with Kexec, it skips the boot loader stage. The normal booting procedure can fail because of an error in the boot loader configuration. With Kexec, you do not depend on a working boot loader configuration.

17.2 Required Packages

To use Kexec on SUSE® Linux Enterprise Desktop to speed up reboots or avoid potential hardware problems, make sure that the package kexec-tools is installed. It contains a script called kexec-bootloader, which reads the boot loader configuration and runs Kexec using the same kernel options as the normal boot loader.

To set up an environment that helps you obtain debug information in case of a kernel crash, make sure that the package makedumpfile is installed.

The preferred method of using Kdump in SUSE Linux Enterprise Desktop is through the YaST Kdump module. To use the YaST module, make sure that the package yast2-kdump is installed.

17.3 Kexec Internals

The most important component of Kexec is the /sbin/kexec command. You can load a kernel with Kexec in two different ways:

  • Load the kernel to the address space of a production kernel for a regular reboot:

    root # kexec -l KERNEL_IMAGE

    You can later boot to this kernel with kexec -e.

  • Load the kernel to a reserved area of memory:

    root # kexec -p KERNEL_IMAGE

    This kernel will be booted automatically when the system crashes.

If you want to boot another kernel and preserve the data of the production kernel when the system crashes, you need to reserve a dedicated area of the system memory. The production kernel never loads to this area because it must be always available. It is used for the capture kernel so that the memory pages of the production kernel can be preserved.

To reserve the area, append the option crashkernel to the boot command line of the production kernel. To determine the necessary values for crashkernel, follow the instructions in Section 17.4, “Calculating crashkernel Allocation Size”.

Note that this is not a parameter of the capture kernel. The capture kernel does not use Kexec.

The capture kernel is loaded to the reserved area and waits for the kernel to crash. Then, Kdump tries to invoke the capture kernel because the production kernel is no longer reliable at this stage. This means that even Kdump can fail.

To load the capture kernel, you need to include the kernel boot parameters. Usually, the initial RAM file system is used for booting. You can specify it with --initrd=FILENAME. With --append=CMDLINE, you append options to the command line of the kernel to boot.

It is helpful to include the command line of the production kernel if these options are necessary for the kernel to boot. You can simply copy the command line with --append="$(cat /proc/cmdline)" or add more options with --append="$(cat /proc/cmdline) more_options".

You can always unload a previously loaded kernel. To unload a kernel that was loaded with the -l option, use the kexec -u command. To unload a crash kernel loaded with the -p option, use the kexec -p -u command.

17.4 Calculating crashkernel Allocation Size

To use Kexec with a capture kernel and to use Kdump in any way, RAM needs to be allocated for the capture kernel. The allocation size depends on the expected hardware configuration of the computer; therefore, you need to specify it.

The allocation size also depends on the hardware architecture of your computer. Make sure to follow the procedure intended for your system architecture.

Procedure 17.1: Allocation Size on AMD64/Intel 64
  1. To find out the base value for the computer, run the following in a terminal:

    root # kdumptool calibrate

    This command returns a list of values. All values are given in megabytes.

  2. Write down the values of Low and High.

    Note
    Note: Significance of Low and High Values

    On AMD64/Intel 64 computers, the High value stands for the memory reservation for all available memory. The Low value stands for the memory reservation in the DMA32 zone, that is, all the memory up to the 4 GB mark.

    If the computer has less than 4 GB of RAM, the High memory reservation is allocated and the Low memory reservation is ignored. If the computer has more than 4 GB of RAM, the Low memory reservation is allocated additionally.

  3. Adapt the High value from the previous step for the number of LUN kernel paths (paths to storage devices) attached to the computer. A sensible value in megabytes can be calculated using this formula (a worked example follows this procedure):

    SIZE_HIGH = RECOMMENDATION + (LUNs / 2)

    The following parameters are used in this formula:

    • SIZE_HIGH.  The resulting value for High.

    • RECOMMENDATION.  The value recommended by kdumptool calibrate for High.

    • LUNs.  The maximum number of LUN kernel paths that you expect to ever create on the computer. Exclude multipath devices from this number, as these are ignored.

    Important
    Important: Adjust for Large Amounts of RAM

    For machines that have multiple terabytes (!) of RAM, such as many servers running SAP HANA, you need to additionally adjust the amount of both Kdump High and Low Memory.

    Experience suggests that in such cases, you might be successful using the following formulas:

    SIZE_HIGH = (RECOMMENDATION * RAM_IN_TB) + (LUNs / 2)
    SIZE_LOW = (RECOMMENDATION * RAM_IN_TB) + CUSTOM_DRIVER-RESERVATION_ADJUSTMENT
  4. If the drivers for your device make many reservations in the DMA32 zone, the Low value also needs to be adjusted. However, there is no simple formula to calculate these. Finding the right size can therefore be a process of trial and error.

    For the beginning, use the Low value recommended by kdumptool calibrate.

  5. The values now need to be set in the correct location.

    If you are changing the kernel command line directly

    Append the following kernel option to your boot loader configuration:

    crashkernel=SIZE_HIGH,high crashkernel=SIZE_LOW,low

    Replace the placeholders SIZE_HIGH and SIZE_LOW with the appropriate value from the previous steps and append the letter M (for megabytes).

    As an example, the following is valid:

    crashkernel=36M,high crashkernel=72M,low

    If you are using the YaST GUI:

    Set Kdump Low Memory to the determined Low value.

    Set Kdump High Memory to the determined High value.

    If you are using the YaST command line interface:

    Use the following command:

    root # yast kdump startup enable alloc_mem=LOW,HIGH

    Replace LOW with the determined Low value. Replace HIGH with the determined High value.
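
A worked example for the procedure above (the numbers are hypothetical; use the values that kdumptool calibrate reports on your system): suppose calibrate reports High: 256 and Low: 72, and you expect at most 128 LUN kernel paths. Then SIZE_HIGH = 256 + 128/2 = 320, and the resulting kernel options would be:

crashkernel=320M,high crashkernel=72M,low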

Procedure 17.2: Allocation Size on POWER and z Systems
  1. To find out the base value for the computer, run the following in a terminal:

    root # kdumptool calibrate

    This command returns a list of values. All values are given in megabytes.

  2. Write down the value of Low.

  3. Adapt the Low value from the previous step for the number of LUN kernel paths (paths to storage devices) attached to the computer. A sensible value in megabytes can be calculated using this formula:

    SIZE_LOW = RECOMMENDATION + (LUNs / 2)

    The following parameters are used in this formula:

    • SIZE_LOW.  The resulting value for Low.

    • RECOMMENDATION.  The value recommended by kdumptool calibrate for Low.

    • LUNs.  The maximum number of LUN kernel paths that you expect to ever create on the computer. Exclude multipath devices from this number, as these are ignored.

  4. The values now need to be set in the correct location.

    If you are working on the command line

    Append the following kernel option to your boot loader configuration:

    crashkernel=SIZE_LOW

    Replace the placeholder SIZE_LOW with the appropriate value from the previous step and append the letter M (for megabytes).

    As an example, the following is valid:

    crashkernel=108M

    If you are working in YaST

    Set Kdump Memory to the determined Low value.

Tip
Tip: Excluding Unused and Inactive CCW Devices on IBM z Systems

Depending on the number of available devices, the calculated amount of memory specified by the crashkernel kernel parameter may not be sufficient. Instead of increasing the value, you can alternatively limit the number of devices visible to the kernel. This lowers the required amount of memory for the crashkernel setting.

  1. To ignore devices, run the cio_ignore tool to generate an appropriate stanza that ignores all devices except the ones currently active or in use.

    tux > sudo cio_ignore -u -k
    cio_ignore=all,!da5d,!f500-f502

    When you run cio_ignore -u -k, the blacklist becomes active and replaces any existing blacklist immediately. Unused devices are not purged, so they still appear in the channel subsystem. However, new channel devices (added via CP ATTACH under z/VM or a dynamic I/O configuration change in LPAR) will be treated as blacklisted. To prevent this, preserve the original setting by running sudo cio_ignore -l first, and revert to that state after running cio_ignore -u -k. As an alternative, add the generated stanza to the regular kernel boot parameters.

  2. Now add the cio_ignore kernel parameter with the stanza from above to KDUMP_COMMANDLINE_APPEND in /etc/sysconfig/kdump, for example:

    KDUMP_COMMANDLINE_APPEND="cio_ignore=all,!da5d,!f500-f502"
  3. Activate the setting by restarting kdump:

    systemctl restart kdump.service

17.5 Basic Kexec Usage

To verify if your Kexec environment works properly, follow these steps:

  1. Make sure no users are currently logged in and no important services are running on the system.

  2. Log in as root.

  3. Switch to the rescue target with systemctl isolate rescue.target.

  4. Load the new kernel to the address space of the production kernel with the following command:

    root # kexec -l /boot/vmlinuz --append="$(cat /proc/cmdline)" \
    --initrd=/boot/initrd
  5. Unmount all mounted file systems except the root file system with:

    umount -a

    Important
    Important: Unmounting the Root File System

    Unmounting all file systems will most likely produce a "device is busy" warning message. The root file system cannot be unmounted while the system is running. Ignore the warning.

  6. Remount the root file system in read-only mode:

    root # mount -o remount,ro /
  7. Initiate the reboot of the kernel that you loaded in Step 4 with:

    root # kexec -e

It is important to unmount disk volumes that are mounted in read-write mode. The reboot system call acts immediately upon being called: hard disk volumes mounted in read-write mode are neither synchronized nor unmounted automatically, and the new kernel may find them dirty. Read-only disk volumes and virtual file systems do not need to be unmounted. Refer to /etc/mtab to determine which file systems you need to unmount.

The new kernel, previously loaded into the address space of the old kernel, overwrites it and takes control immediately. It displays the usual start-up messages. When the new kernel boots, it skips all hardware and firmware checks. Make sure no warning messages appear. All file systems should be clean if they were unmounted beforehand.

17.6 How to Configure Kexec for Routine Reboots

Kexec is often used for frequent reboots, for example, if it takes a long time to run through the hardware detection routines or if the start-up is not reliable.

Note that firmware and the boot loader are not used when the system reboots with Kexec. Any changes you make to the boot loader configuration will be ignored until the computer performs a hard reboot.
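
As a sketch, such a routine reboot could look like this (assuming the kexec-tools package is installed, see Section 17.2): load the kernel using the options from the boot loader configuration, then execute it:

root # kexec-bootloader
root # kexec -e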

17.7 Basic Kdump Configuration

You can use Kdump to save kernel dumps. If the kernel crashes, it is useful to copy the memory image of the crashed environment to the file system. You can then debug the dump file to find the cause of the kernel crash. The saved memory image is called a core dump.

Kdump works similarly to Kexec (see Chapter 17, Kexec and Kdump). The capture kernel is executed after the running production kernel crashes. The difference is that Kexec replaces the production kernel with the capture kernel. With Kdump, you still have access to the memory space of the crashed production kernel. You can save the memory snapshot of the crashed kernel in the environment of the Kdump kernel.

Tip
Tip: Dumps over Network

In environments with limited local storage, you need to set up kernel dumps over the network. Kdump supports configuring the specified network interface and bringing it up via initrd. Both LAN and VLAN interfaces are supported. Specify the network interface and the mode (DHCP or static) either with YaST, or using the KDUMP_NETCONFIG option in the /etc/sysconfig/kdump file.

Important
Important: Target File System for Kdump Must Be Mounted During Configuration

When configuring Kdump, you can specify a location to which the dumped images will be saved (default: /var/crash). This location must be mounted when configuring Kdump, otherwise the configuration will fail.

17.7.1 Manual Kdump Configuration

Kdump reads its configuration from the /etc/sysconfig/kdump file. The default configuration is sufficient for Kdump to work. To use Kdump with the default settings, follow these steps:

  1. Determine the amount of memory needed for Kdump by following the instructions in Section 17.4, “Calculating crashkernel Allocation Size”. Make sure to set the kernel parameter crashkernel.

  2. Reboot the computer.

  3. Enable the Kdump service:

    root # systemctl enable kdump
  4. You can edit the options in /etc/sysconfig/kdump. Reading the comments will help you understand the meaning of individual options.

  5. Start the Kdump service once with sudo systemctl start kdump, or reboot the system.

After configuring Kdump with the default values, check if it works as expected. Make sure that no users are currently logged in and no important services are running on your system. Then follow these steps:

  1. Switch to the rescue target with systemctl isolate rescue.target.

  2. Restart the Kdump service:

    root # systemctl start kdump
  3. Unmount all the disk file systems except the root file system with:

    root # umount -a
  4. Remount the root file system in read-only mode:

    root # mount -o remount,ro /
  5. Invoke a kernel panic with the procfs interface to Magic SysRq keys:

    root # echo c > /proc/sysrq-trigger

Important
Important: Size of Kernel Dumps

The KDUMP_KEEP_OLD_DUMPS option controls the number of preserved kernel dumps (default: 5). Without compression, a dump can be as large as the physical RAM. Make sure you have sufficient space on the /var partition.

The capture kernel boots and the memory snapshot of the crashed kernel is saved to the file system. The save path is given by the KDUMP_SAVEDIR option and defaults to /var/crash. If KDUMP_IMMEDIATE_REBOOT is set to yes, the system automatically reboots into the production kernel. Log in and check that the dump has been created under /var/crash.

17.7.1.1 Static IP Configuration for Kdump

In case Kdump is configured to use a static IP configuration from a network device, you need to add the network configuration to the KDUMP_COMMANDLINE_APPEND variable in /etc/sysconfig/kdump.

Example 17.1: Kdump: Example Configuration Using a Static IP Setup

The following setup has been configured:

  • eth0 has been configured with the static IP address 192.168.1.1/24

  • eth1 has been configured with the static IP address 10.50.50.100/20

  • The Kdump configuration in /etc/sysconfig/kdump looks like:

    KDUMP_CPUS=1
    KDUMP_IMMEDIATE_REBOOT=yes
    KDUMP_SAVEDIR=ftp://anonymous@10.50.50.140/crashdump/
    KDUMP_KEEP_OLD_DUMPS=5
    KDUMP_FREE_DISK_SIZE=64
    KDUMP_VERBOSE=3
    KDUMP_DUMPLEVEL=31
    KDUMP_DUMPFORMAT=lzo
    KDUMP_CONTINUE_ON_ERROR=yes
    KDUMP_NETCONFIG=eth1:static
    KDUMP_NET_TIMEOUT=30

Using this configuration, Kdump fails to reach the network when trying to write the dump to the FTP server. To solve this issue, add the network configuration to KDUMP_COMMANDLINE_APPEND in /etc/sysconfig/kdump. The general pattern for this looks like the following:

KDUMP_COMMANDLINE_APPEND='ip=CLIENT IP:SERVER IP:GATEWAY IP:NETMASK:CLIENT HOSTNAME:DEVICE:PROTOCOL'

For the example configuration this would result in:

KDUMP_COMMANDLINE_APPEND='ip=10.50.50.100:10.50.50.140:10.60.48.1:255.255.240.0:dump-client:eth1:none'

17.7.2 YaST Configuration

To configure Kdump with YaST, you need to install the yast2-kdump package. Then either start the Kernel Kdump module in the System category of YaST Control Center, or enter yast2 kdump in the command line as root.

Figure 17.1: YaST Kdump Module: Start-Up Page

In the Start-Up window, select Enable Kdump.

The values for Kdump Memory are automatically generated the first time you open the window. However, that does not mean that they are always sufficient. To set the right values, follow the instructions in Section 17.4, “Calculating crashkernel Allocation Size”.

Important
Important: After Hardware Changes, Set Kdump Memory Values Again

If you have set up Kdump on a computer and later decide to change the amount of RAM or hard disks available to it, YaST will continue to display and use outdated memory values.

To work around this, determine the necessary memory again, as described in Section 17.4, “Calculating crashkernel Allocation Size”. Then set it manually in YaST.

Click Dump Filtering in the left pane, and check what pages to include in the dump. You do not need to include the following memory content to be able to debug kernel problems:

  • Pages filled with zero

  • Cache pages

  • User data pages

  • Free pages

In the Dump Target window, select the type of the dump target and the URL where you want to save the dump. If you selected a network protocol, such as FTP or SSH, you need to enter relevant access information as well.

Tip
Tip: Sharing the Dump Directory with Other Applications

It is possible to specify a path for saving Kdump dumps where other applications also save their dumps. When cleaning its old dump files, Kdump will safely ignore other applications' dump files.

Fill in the Email Notification window if you want Kdump to inform you about its events via e-mail. After fine-tuning Kdump in the Expert Settings window, confirm your changes with OK. Kdump is now configured.

17.8 Analyzing the Crash Dump

After you obtain the dump, it is time to analyze it. There are several options.

The original tool to analyze the dumps is GDB. You can even use it in the latest environments, although it has several disadvantages and limitations:

  • GDB was not specifically designed to debug kernel dumps.

  • GDB does not support ELF64 binaries on 32-bit platforms.

  • GDB does not understand formats other than ELF dumps (for example, it cannot debug compressed dumps).

That is why the crash utility was implemented. It analyzes crash dumps and debugs the running system as well. It provides functionality specific to debugging the Linux kernel and is much more suitable for advanced debugging.

To debug the Linux kernel, you additionally need to install its debugging information package. Check whether the package is installed on your system with:

tux > zypper se kernel | grep debug

Important
Important: Repository for Packages with Debugging Information

If you subscribed your system for online updates, you can find debuginfo packages in the *-Debuginfo-Updates online installation repository relevant for SUSE Linux Enterprise Desktop 12 SP3. Use YaST to enable the repository.

To open the captured dump in crash on the machine that produced the dump, use a command like this:

crash /boot/vmlinux-2.6.32.8-0.1-default.gz \
/var/crash/2010-04-23-11\:17/vmcore

The first parameter represents the kernel image. The second parameter is the dump file captured by Kdump. You can find this file under /var/crash by default.

Tip
Tip: Getting Basic Information from a Kernel Crash Dump

SUSE Linux Enterprise Desktop ships with the utility kdumpid (included in a package with the same name) for identifying unknown kernel dumps. It can be used to extract basic information such as architecture and kernel release. It supports lkcd, diskdump, Kdump files and ELF dumps. When called with the -v switch it tries to extract additional information such as machine type, kernel banner string and kernel configuration flavor.

17.8.1 Kernel Binary Formats

The Linux kernel comes in Executable and Linkable Format (ELF). This file is usually called vmlinux and is directly generated in the compilation process. Not all boot loaders support ELF binaries, especially on the AMD64/Intel 64 architecture. The following solutions exist on different architectures supported by SUSE® Linux Enterprise Desktop.

17.8.1.1 AMD64/Intel 64

Kernel packages for AMD64/Intel 64 from SUSE contain two kernel files: vmlinuz and vmlinux.gz.

  • vmlinuz This is the file executed by the boot loader.

    The Linux kernel consists of two parts: the kernel itself (vmlinux) and the setup code run by the boot loader. These two parts are linked together to create vmlinuz (note the distinction: z compared to x).

    In the kernel source tree, the file is called bzImage.

  • vmlinux.gz This is a compressed ELF image that can be used by crash and GDB. The ELF image is never used by the boot loader itself on AMD64/Intel 64. Therefore, only a compressed version is shipped.

17.8.1.2 POWER

The yaboot boot loader on POWER also supports loading ELF images, but not compressed ones. The POWER kernel package contains an ELF Linux kernel file, vmlinux. From the crash utility's point of view, this is the easiest architecture.

If you decide to analyze the dump on another machine, you must check both the architecture of the computer and the files necessary for debugging.

You can analyze the dump on another computer only if it runs a Linux system of the same architecture. To check the compatibility, use the command uname -i on both computers and compare the outputs.

If you are going to analyze the dump on another computer, you also need the appropriate files from the kernel and kernel debug packages.

  1. Put the kernel dump, the kernel image from /boot, and its associated debugging info file from /usr/lib/debug/boot into a single empty directory.

  2. Additionally, copy the kernel modules from /lib/modules/$(uname -r)/kernel/ and the associated debug info files from /usr/lib/debug/lib/modules/$(uname -r)/kernel/ into a subdirectory named modules.

  3. In the directory with the dump, the kernel image, its debug info file, and the modules subdirectory, start the crash utility:

    tux > crash VMLINUX-VERSION vmcore

Regardless of the computer on which you analyze the dump, the crash utility will produce output similar to this:

tux > crash /boot/vmlinux-2.6.32.8-0.1-default.gz \
/var/crash/2010-04-23-11\:17/vmcore

crash 4.0-7.6
Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb 6.1
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

      KERNEL: /boot/vmlinux-2.6.32.8-0.1-default.gz
   DEBUGINFO: /usr/lib/debug/boot/vmlinux-2.6.32.8-0.1-default.debug
    DUMPFILE: /var/crash/2009-04-23-11:17/vmcore
        CPUS: 2
        DATE: Thu Apr 23 13:17:01 2010
      UPTIME: 00:10:41
LOAD AVERAGE: 0.01, 0.09, 0.09
       TASKS: 42
    NODENAME: eros
     RELEASE: 2.6.32.8-0.1-default
     VERSION: #1 SMP 2010-03-31 14:50:44 +0200
     MACHINE: x86_64  (2999 Mhz)
      MEMORY: 1 GB
       PANIC: "SysRq : Trigger a crashdump"
         PID: 9446
     COMMAND: "bash"
        TASK: ffff88003a57c3c0  [THREAD_INFO: ffff880037168000]
         CPU: 1
       STATE: TASK_RUNNING (SYSRQ)
crash> 

The command output already shows the first useful data: there were 42 tasks running at the moment of the kernel crash. The cause of the crash was a SysRq trigger invoked by the task with PID 9446. It was a Bash process, because the echo command that was used is an internal command of the Bash shell.

The crash utility builds upon GDB and provides many additional commands. If you enter bt without any parameters, the backtrace of the task running at the moment of the crash is printed:

crash> bt
PID: 9446   TASK: ffff88003a57c3c0  CPU: 1   COMMAND: "bash"
 #0 [ffff880037169db0] crash_kexec at ffffffff80268fd6
 #1 [ffff880037169e80] __handle_sysrq at ffffffff803d50ed
 #2 [ffff880037169ec0] write_sysrq_trigger at ffffffff802f6fc5
 #3 [ffff880037169ed0] proc_reg_write at ffffffff802f068b
 #4 [ffff880037169f10] vfs_write at ffffffff802b1aba
 #5 [ffff880037169f40] sys_write at ffffffff802b1c1f
 #6 [ffff880037169f80] system_call_fastpath at ffffffff8020bfbb
    RIP: 00007fa958991f60  RSP: 00007fff61330390  RFLAGS: 00010246
    RAX: 0000000000000001  RBX: ffffffff8020bfbb  RCX: 0000000000000001
    RDX: 0000000000000002  RSI: 00007fa959284000  RDI: 0000000000000001
    RBP: 0000000000000002   R8: 00007fa9592516f0   R9: 00007fa958c209c0
    R10: 00007fa958c209c0  R11: 0000000000000246  R12: 00007fa958c1f780
    R13: 00007fa959284000  R14: 0000000000000002  R15: 00000000595569d0
    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
crash> 

Now it is clear what happened: The internal echo command of Bash shell sent a character to /proc/sysrq-trigger. After the corresponding handler recognized this character, it invoked the crash_kexec() function. This function called panic() and Kdump saved a dump.

In addition to the basic GDB commands and the extended version of bt, the crash utility defines many other commands related to the structure of the Linux kernel. These commands understand the internal data structures of the Linux kernel and present their contents in a human-readable format. For example, you can list the tasks running at the moment of the crash with ps. With sym, you can list all kernel symbols with their corresponding addresses, or query an individual symbol for its value. With files, you can display all the open file descriptors of a process. With kmem, you can display details about kernel memory usage. With vm, you can inspect the virtual memory of a process, even at the level of individual page mappings. The list of useful commands is very long, and many of them accept a wide range of options.
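
An illustrative sequence of such commands in a crash session, using the PID from the dump above, might look like this (use help COMMAND within crash for details on each):

crash> ps
crash> files 9446
crash> sym sys_write
crash> kmem -i
crash> vm 9446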

The commands that we mentioned reflect the functionality of the common Linux commands, such as ps and lsof. To find out the exact sequence of events with the debugger, you need to know how to use GDB and to have strong debugging skills. Both of these are out of the scope of this document. In addition, you need to understand the Linux kernel. Several useful reference information sources are given at the end of this document.

17.9 Advanced Kdump Configuration

The configuration for Kdump is stored in /etc/sysconfig/kdump. You can also use YaST to configure it. Kdump configuration options are available under System › Kernel Kdump in YaST Control Center. The following Kdump options may be useful for you.

You can change the directory for the kernel dumps with the KDUMP_SAVEDIR option. Keep in mind that the size of kernel dumps can be very large. Kdump will refuse to save the dump if the free disk space, minus the estimated dump size, drops below the value specified by the KDUMP_FREE_DISK_SIZE option. Note that KDUMP_SAVEDIR understands the URL format PROTOCOL://SPECIFICATION, where PROTOCOL is one of file, ftp, sftp, nfs or cifs, and SPECIFICATION varies for each protocol. For example, to save a kernel dump on an FTP server, use the following URL as a template: ftp://username:password@ftp.example.com:123/var/crash.

Kernel dumps are usually huge and contain many pages that are not necessary for analysis. With the KDUMP_DUMPLEVEL option, you can omit such pages. The option understands a numeric value between 0 and 31. If you specify 0, the dump size will be largest. If you specify 31, it will produce the smallest dump. For a complete table of possible values, see the manual page of kdump (man 7 kdump).

Sometimes it is very useful to make the size of the kernel dump smaller, for example, if you want to transfer the dump over the network, or if you need to save disk space in the dump directory. This can be done by setting KDUMP_DUMPFORMAT to compressed. The crash utility supports dynamic decompression of compressed dumps.
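
A minimal sketch of these settings in /etc/sysconfig/kdump (the values are examples, not recommendations):

KDUMP_SAVEDIR=file:///var/crash
KDUMP_FREE_DISK_SIZE=64
KDUMP_DUMPLEVEL=31
KDUMP_DUMPFORMAT=compressed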

Important
Important: Changes to the Kdump Configuration File

You always need to execute systemctl restart kdump after you make manual changes to /etc/sysconfig/kdump. Otherwise, these changes will only take effect the next time you reboot the system.

17.10 For More Information

There is no single comprehensive reference to Kexec and Kdump usage. However, there are helpful resources that deal with certain aspects:

For more details on crash dump analysis and debugging tools, use the following resources:

  • In addition to the info page of GDB (info gdb), there are printable guides at http://sourceware.org/gdb/documentation/ .

  • A white paper with a comprehensive description of the crash utility usage can be found at http://people.redhat.com/anderson/crash_whitepaper/.

  • The crash utility also features a comprehensive online help. Use help COMMAND to display the online help for COMMAND.

  • If you have the necessary Perl skills, you can use Alicia to make the debugging easier. This Perl-based front-end to the crash utility can be found at http://alicia.sourceforge.net/ .

  • If you prefer to use Python instead, you should install Pykdump. This package helps you control GDB through Python scripts and can be downloaded from http://sf.net/projects/pykdump .

  • A very comprehensive overview of the Linux kernel internals is given in Understanding the Linux Kernel by Daniel P. Bovet and Marco Cesati (ISBN 978-0-596-00565-8).

Part VII Synchronized Clocks with Precision Time Protocol

18 Precision Time Protocol

For network environments, it is vital to keep the computer and other devices' clocks synchronized and accurate. There are several solutions to achieve this, for example the widely used Network Time Protocol (NTP) described in Chapter 25, Time Synchronization with NTP.

18 Precision Time Protocol


For network environments, it is vital to keep the computer and other devices' clocks synchronized and accurate. There are several solutions to achieve this, for example the widely used Network Time Protocol (NTP) described in Chapter 25, Time Synchronization with NTP.

The Precision Time Protocol (PTP) is a protocol capable of sub-microsecond accuracy, which is better than what NTP achieves. PTP support is divided between the kernel and user space. The kernel in SUSE Linux Enterprise Desktop includes support for PTP clocks, which are provided by network drivers.

18.1 Introduction to PTP

The clocks managed by PTP follow a master-slave hierarchy. The slaves are synchronized to their masters. The hierarchy is updated by the best master clock (BMC) algorithm, which runs on every clock. A clock with only one port can be either master or slave; such a clock is called an ordinary clock (OC). A clock with multiple ports can be master on one port and slave on another; such a clock is called a boundary clock (BC). The top-level master is called the grandmaster clock. The grandmaster clock can be synchronized with a Global Positioning System (GPS). This way, disparate networks can be synchronized with a high degree of accuracy.

Hardware support is the main advantage of PTP. It is supported by various network switches and network interface controllers (NICs). While it is possible to use non-PTP enabled hardware within the network, the best possible accuracy is achieved when all network components between the PTP clocks are PTP hardware enabled.

18.1.1 PTP Linux Implementation

On SUSE Linux Enterprise Desktop, the implementation of PTP is provided by the linuxptp package. Install it with zypper install linuxptp. It includes the ptp4l and phc2sys programs for clock synchronization. ptp4l implements the PTP boundary clock and ordinary clock. When hardware time stamping is enabled, ptp4l synchronizes the PTP hardware clock to the master clock. With software time stamping, it synchronizes the system clock to the master clock. phc2sys is needed only with hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface card (NIC).
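
For example, with hardware time stamping, phc2sys could be started as follows to synchronize the system clock to the PTP hardware clock of eth0 (a sketch; the -w option makes phc2sys wait until ptp4l is in a synchronized state):

root # phc2sys -s eth0 -w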

18.2 Using PTP

18.2.1 Network Driver and Hardware Support

PTP requires that the used kernel network driver supports either software or hardware time stamping. Moreover, the NIC must support time stamping in the physical hardware. You can verify the driver and NIC time stamping capabilities with ethtool:

ethtool -T eth0
Time stamping parameters for eth0:
Capabilities:
        hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
        software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
        hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
        software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
        software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
        hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
        off                   (HWTSTAMP_TX_OFF)
        on                    (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
        none                  (HWTSTAMP_FILTER_NONE)
        all                   (HWTSTAMP_FILTER_ALL)

Software time stamping requires the following parameters:

SOF_TIMESTAMPING_SOFTWARE
SOF_TIMESTAMPING_TX_SOFTWARE
SOF_TIMESTAMPING_RX_SOFTWARE

Hardware time stamping requires the following parameters:

SOF_TIMESTAMPING_RAW_HARDWARE
SOF_TIMESTAMPING_TX_HARDWARE
SOF_TIMESTAMPING_RX_HARDWARE
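
To quickly check whether an interface meets these requirements, you can filter the ethtool output. A minimal sketch, with eth0 as a placeholder for your interface:

ethtool -T eth0 | grep SOF_TIMESTAMPING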

18.2.2 Using ptp4l

ptp4l uses hardware time stamping by default. As root, you need to specify the network interface capable of hardware time stamping with the -i option. The -m option tells ptp4l to print its output to standard output instead of the system's logging facility:

ptp4l -m -i eth0
selected eth0 as PTP clock
port 1: INITIALIZING to LISTENING on INITIALIZE
port 0: INITIALIZING to LISTENING on INITIALIZE
port 1: new foreign master 00a152.fffe.0b334d-1
selected best master clock 00a152.fffe.0b334d
port 1: LISTENING to UNCALIBRATED on RS_SLAVE
master offset -25937 s0 freq +0 path delay       12340
master offset -27887 s0 freq +0 path delay       14232
master offset -38802 s0 freq +0 path delay       13847
master offset -36205 s1 freq +0 path delay       10623
master offset  -6975 s2 freq -30575 path delay   10286
port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
master offset  -4284 s2 freq -30135 path delay    9892

The master offset value represents the measured offset from the master (in nanoseconds).

The s0, s1, s2 indicators show the different states of the clock servo: s0 is unlocked, s1 is clock step, and s2 is locked. If the servo is in the locked state (s2), the clock will not be stepped (only slowly adjusted) if the pi_offset_const option is set to a negative value in the configuration file (see man 8 ptp4l for more information).

The freq value represents the frequency adjustment of the clock (in parts per billion, ppb).

The path delay value represents the estimated delay of the synchronization messages sent from the master (in nanoseconds).

Port 0 is a Unix domain socket used for local PTP management. Port 1 is the eth0 interface.

INITIALIZING, LISTENING, UNCALIBRATED and SLAVE are examples of port states which change on INITIALIZE, RS_SLAVE, and MASTER_CLOCK_SELECTED events. When the port state changes from UNCALIBRATED to SLAVE, the computer has successfully synchronized with a PTP master clock.

You can enable software time stamping with the -S option.

ptp4l -m -S -i eth3

You can also run ptp4l as a service:

systemctl start ptp4l

In this case, ptp4l reads its options from the /etc/sysconfig/ptp4l file. By default, this file tells ptp4l to read the configuration options from /etc/ptp4l.conf. For more information on ptp4l options and the configuration file settings, see man 8 ptp4l.
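
A minimal sketch of what /etc/sysconfig/ptp4l might contain, assuming eth0 is the interface used above (the OPTIONS variable holds the ptp4l command line options):

OPTIONS="-f /etc/ptp4l.conf -i eth0"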

To enable the ptp4l service permanently, run the following:

systemctl enable ptp4l

To disable it, run

systemctl disable ptp4l

18.2.3 ptp4l Configuration File

ptp4l can read its configuration from an optional configuration file. As no configuration file is used by default, you need to specify it with -f.

ptp4l -f /etc/ptp4l.conf

The configuration file is divided into sections. The global section (indicated as [global]) sets the program options, clock options and default port options. Other sections are port specific, and they override the default port options. The name of the section is the name of the configured port—for example, [eth0]. An empty port section can be used to replace the command line option.

[global]
verbose               1
time_stamping         software
[eth0]

The example configuration file is an equivalent of the following command's options:

ptp4l -i eth0 -m -S

For a complete list of ptp4l configuration options, see man 8 ptp4l.

18.2.4 Delay Measurement

ptp4l measures time delay in two different ways: peer-to-peer (P2P) or end-to-end (E2E).

P2P

This method is specified with -P.

It reacts to changes in the network environment faster and is more accurate in measuring the delay. It is only used in networks where each port exchanges PTP messages with one other port. P2P needs to be supported by all hardware on the communication path.

E2E

This method is specified with -E. This is the default.

Automatic method selection

This method is specified with -A. The automatic option starts ptp4l in E2E mode, and changes to P2P mode if a peer delay request is received.
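
The delay mechanism can also be selected in the ptp4l configuration file instead of on the command line. A sketch equivalent to the -P option:

[global]
delay_mechanism P2P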

Important
Important: Common Measurement Method

All clocks on a single PTP communication path must use the same method to measure the time delay. A warning will be printed if either a peer delay request is received on a port using the E2E mechanism, or an E2E delay request is received on a port using the P2P mechanism.

18.2.5 PTP Management Client: pmc

You can use the pmc client to obtain more detailed information about ptp4l. It reads actions specified by name and management ID from standard input or from the command line, sends them over the selected transport, and prints any received replies. There are three supported actions: GET retrieves the specified information, SET updates the specified information, and CMD (or COMMAND) initiates the specified event.

By default, the management commands are addressed to all ports. The TARGET command can be used to select a particular clock and port for the subsequent messages. For a complete list of management IDs, run pmc help.

pmc -u -b 0 'GET TIME_STATUS_NP'
sending: GET TIME_STATUS_NP
        90f2ca.fffe.20d7e9-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP
                master_offset              283
                ingress_time               1361569379345936841
                cumulativeScaledRateOffset   +1.000000000
                scaledLastGmPhaseChange    0
                gmTimeBaseIndicator        0
                lastGmPhaseChange          0x0000'0000000000000000.0000
                gmPresent                  true
                gmIdentity                 00b058.feef.0b448a

The -b option specifies the boundary hops value in sent messages. Setting it to zero limits the boundary to the local ptp4l instance. Increasing the value also retrieves messages from PTP nodes that are further from the local instance. The returned information may include:

stepsRemoved

The number of communication nodes to the grandmaster clock.

offsetFromMaster, master_offset

The last measured offset of the clock from the master clock (nanoseconds).

meanPathDelay

The estimated delay of the synchronization messages sent from the master clock (nanoseconds).

gmPresent

If true, the PTP clock is synchronized to the master clock; the local clock is not the grandmaster clock.

gmIdentity

This is the grandmaster's identity.
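
For example, stepsRemoved, offsetFromMaster, and meanPathDelay are reported by the CURRENT_DATA_SET management ID. A usage sketch (the reply format matches the TIME_STATUS_NP example above):

pmc -u -b 0 'GET CURRENT_DATA_SET'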

For a complete list of pmc command line options, see man 8 pmc.

18.3 Synchronizing the Clocks with phc2sys

Use phc2sys to synchronize the system clock to the PTP hardware clock (PHC) on the network card. The system clock is considered a slave, while the clock on the network card is considered a master. The PHC itself is synchronized with ptp4l (see Section 18.2, “Using PTP”). Use -s to specify the master clock by device or network interface. Use -w to wait until ptp4l is in a synchronized state.

phc2sys -s eth0 -w

PTP operates in International Atomic Time (TAI), while the system clock uses Coordinated Universal Time (UTC). If you do not specify -w to wait for ptp4l synchronization, you can specify the offset in seconds between TAI and UTC with -O:

phc2sys -s eth0 -O -35

You can run phc2sys as a service as well:

systemctl start phc2sys

In this case, phc2sys reads its options from the /etc/sysconfig/phc2sys file. For more information on phc2sys options, see man 8 phc2sys.
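
A minimal sketch of what /etc/sysconfig/phc2sys might contain, assuming eth0 as in the examples above:

OPTIONS="-s eth0 -w"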

To enable the phc2sys service permanently, run the following:

systemctl enable phc2sys

To disable it, run

systemctl disable phc2sys

18.3.1 Verifying Time Synchronization

When PTP time synchronization is working properly and hardware time stamping is used, ptp4l and phc2sys output messages with time offsets and frequency adjustments periodically to the system log.

An example of the ptp4l output:

ptp4l[351.358]: selected /dev/ptp0 as PTP clock
ptp4l[352.361]: port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l[352.361]: port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l[353.210]: port 1: new foreign master 00a069.eefe.0b442d-1
ptp4l[357.214]: selected best master clock 00a069.eefe.0b662d
ptp4l[357.214]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[359.224]: master offset       3304 s0 freq      +0 path delay      9202
ptp4l[360.224]: master offset       3708 s1 freq  -28492 path delay      9202
ptp4l[361.224]: master offset      -3145 s2 freq  -32637 path delay      9202
ptp4l[361.224]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[362.223]: master offset       -145 s2 freq  -30580 path delay      9202
ptp4l[363.223]: master offset       1043 s2 freq  -28436 path delay      8972
[...]
ptp4l[371.235]: master offset        285 s2 freq  -28511 path delay      9199
ptp4l[372.235]: master offset        -78 s2 freq  -28788 path delay      9204

An example of the phc2sys output:

phc2sys[616.617]: Waiting for ptp4l...
phc2sys[628.628]: phc offset     66341 s0 freq      +0 delay   2729
phc2sys[629.628]: phc offset     64668 s1 freq  -37690 delay   2726
[...]
phc2sys[646.630]: phc offset      -333 s2 freq  -37426 delay   2747
phc2sys[646.630]: phc offset       194 s2 freq  -36999 delay   2749

ptp4l normally writes messages very frequently. You can reduce the frequency with the summary_interval directive. Its value N sets the interval to 2^N seconds. For example, to reduce the output to every 1024 seconds (equal to 2^10), add the following line to the /etc/ptp4l.conf file:

summary_interval 10

You can also reduce the frequency of the phc2sys command's updates with the -u SUMMARY-UPDATES option.
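
For example, to make phc2sys print summary statistics covering 1024 clock updates instead of one line per update (a sketch based on the setup above):

phc2sys -s eth0 -w -u 1024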

18.4 Examples of Configurations

This section includes several examples of ptp4l configuration. The examples are not full configuration files but rather a minimal list of changes to be made to the specific files. The string ethX stands for the actual network interface name in your setup.

Example 18.1: Slave clock using software time stamping

/etc/sysconfig/ptp4l:

OPTIONS="-f /etc/ptp4l.conf -i ethX"

No changes made to the distribution /etc/ptp4l.conf.

Example 18.2: Slave clock using hardware time stamping

/etc/sysconfig/ptp4l:

OPTIONS="-f /etc/ptp4l.conf -i ethX"

/etc/sysconfig/phc2sys:

OPTIONS="-s ethX -w"

No changes made to the distribution /etc/ptp4l.conf.

Example 18.3: Master clock using hardware time stamping

/etc/sysconfig/ptp4l:

OPTIONS="-f /etc/ptp4l.conf -i ethX"

/etc/sysconfig/phc2sys:

OPTIONS="-s CLOCK_REALTIME -c ethX -w"

/etc/ptp4l.conf:

priority1 127

Example 18.4: Master clock using software time stamping (not generally recommended)

/etc/sysconfig/ptp4l:

OPTIONS="-f /etc/ptp4l.conf -i ethX"

/etc/ptp4l.conf:

priority1 127

18.5 PTP and NTP

The NTP and PTP time synchronization tools can coexist, synchronizing time from one to the other in either direction.

18.5.1 NTP to PTP Synchronization

When ntpd is used to synchronize the local system clock, you can configure ptp4l to be the grandmaster clock distributing the time from the local system clock via PTP. Include the priority1 option in /etc/ptp4l.conf:

[global]
priority1 127
[eth0]

Then run ptp4l:

ptp4l -f /etc/ptp4l.conf

When hardware time stamping is used, you need to synchronize the PTP hardware clock to the system clock with phc2sys:

phc2sys -c eth0 -s CLOCK_REALTIME -w

18.5.2 PTP to NTP Synchronization

You can configure ntpd to distribute the time from the system clock synchronized by ptp4l or phc2sys by using the local reference clock driver. Moreover, you need to stop ntpd from adjusting the system clock—do not specify any remote NTP servers in /etc/ntp.conf:

server   127.127.1.0
fudge    127.127.1.0 stratum 0
Note
Note: NTP and DHCP

When the DHCP client command dhclient receives a list of NTP servers, it adds them to the NTP configuration by default. To prevent this behavior, set

NETCONFIG_NTP_POLICY=""

in the /etc/sysconfig/network/config file.

A Documentation Updates

This chapter lists content changes for this document.

This manual was updated on the following dates:

A.1 December 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP3)

General

A.2 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)

General

A.3 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.

Chapter 2, System Monitoring Utilities
Chapter 3, Analyzing and Managing System Log Files
  • Removed references to faillog, which is no longer shipped with SUSE Linux Enterprise Desktop.

Chapter 4, SystemTap—Filtering and Analyzing System Data
Bugfixes

A.4 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)

A.5 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

General
  • SMT Guide is now part of the documentation for SUSE Linux Enterprise Desktop.

  • Add-ons provided by SUSE have been renamed as modules and extensions. The manuals have been updated to reflect this change.

  • Numerous small fixes and additions to the documentation, based on technical feedback.

  • The registration service has been changed from Novell Customer Center to SUSE Customer Center.

  • In YaST, you will now reach Network Settings via the System group. Network Devices is gone (https://bugzilla.suse.com/show_bug.cgi?id=867809).

Chapter 2, System Monitoring Utilities
Chapter 6, Hardware-Based Performance Monitoring with Perf
  • Added Perf chapter, including introductory information about Instruction-Based Sampling (IBS) (Fate #315868).

Chapter 18, Precision Time Protocol
  • Added PTP chapter (Fate #316795).

Bugfixes

A.6 February 2015 (Documentation Maintenance Update)

Bugfixes

A.7 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)

General
  • Removed all KDE documentation and references because KDE is no longer shipped.

  • Removed all references to SuSEconfig, which is no longer supported (Fate #100011).

  • Move from System V init to systemd (Fate #310421). Updated affected parts of the documentation.

  • YaST Runlevel Editor has changed to Services Manager (Fate #312568). Updated affected parts of the documentation.

  • Removed all references to ISDN support, as ISDN support has been removed (Fate #314594).

  • Removed all references to the YaST DSL module as it is no longer shipped (Fate #316264).

  • Removed all references to the YaST Modem module as it is no longer shipped (Fate #316264).

  • Btrfs has become the default file system for the root partition (Fate #315901). Updated affected parts of the documentation.

  • The dmesg command now provides human-readable time stamps in ctime()-like format (Fate #316056). Updated affected parts of the documentation.

  • syslog and syslog-ng have been replaced by rsyslog (Fate #316175). Updated affected parts of the documentation.

  • MariaDB is now shipped as the relational database instead of MySQL (Fate #313595). Updated affected parts of the documentation.

  • SUSE-related products are no longer available from http://download.novell.com but from http://download.suse.com. Adjusted links accordingly.

  • Novell Customer Center has been replaced with SUSE Customer Center. Updated affected parts of the documentation.

  • /var/run is mounted as tmpfs (Fate #303793). Updated affected parts of the documentation.

  • The following architectures are no longer supported: IA64 and x86. Updated affected parts of the documentation.

  • The traditional method for setting up the network with ifconfig has been replaced by wicked. Updated affected parts of the documentation.

  • A lot of networking commands are deprecated and have been replaced by newer commands (usually ip). Updated affected parts of the documentation.

    arp: ip neighbor
    ifconfig: ip addr, ip link
    iptunnel: ip tunnel
    iwconfig: iw
    nameif: ip link, ifrename
    netstat: ss, ip route, ip -s link, ip maddr
    route: ip route
  • Numerous small fixes and additions to the documentation, based on technical feedback.

Chapter 2, System Monitoring Utilities
Chapter 4, SystemTap—Filtering and Analyzing System Data

Added a link to the example scripts Web page to Section 4.1.1, “SystemTap Scripts”.

Chapter 7, OProfile—System-Wide Profiler

Corrected statements on the effects of sampling rates in Section 7.4.2, “Getting Event Configurations”.

Chapter 10, Automatic Non-Uniform Memory Access (NUMA) Balancing

New chapter.

Chapter 13, Tuning the Task Scheduler
Chapter 14, Tuning the Memory Management Subsystem

Added detailed descriptions on tunable parameters to Section 14.3.2, “Writeback Parameters”.

Chapter 17, Kexec and Kdump
Obsolete Content
  • Chapter Monitoring with Nagios has been removed from Part II, “System Monitoring” (Fate #316136), because Nagios is no longer shipped on SUSE Linux Enterprise 12.

  • Chapter Perf mon2—Hardware-Based Performance Monitoring has been removed from Part III, “Kernel Monitoring”, because perfmon2 is no longer shipped on SUSE Linux Enterprise 12.

Bugfixes
SUSE Linux Enterprise Desktop 12 SP3

Subscription Management Tool for SLES 12 SP3

Authors: Tomáš Bažant, Jakub Friedl, and Florian Nadge
Publication Date: May 07, 2018
About This Guide
Overview
Additional Documentation and Resources
Feedback
Documentation Conventions
1 SMT Installation
1.1 SMT Configuration Wizard
1.2 Upgrading from Previous Versions of SMT
1.3 Enabling SLP Announcements
2 SMT Server Configuration
2.1 Activating and Deactivating SMT with YaST
2.2 Setting the Update Server Credentials with YaST
2.3 Setting SMT Database Password with YaST
2.4 Setting E-mail Addresses to Receive Reports with YaST
2.5 Setting the SMT Job Schedule with YaST
3 Mirroring Repositories on the SMT Server
3.1 Mirroring Credentials
3.2 Managing Software Repositories with SMT Command Line Tools
3.3 The Structure of /srv/www/htdocs for SLE 11
3.4 The Structure of /srv/www/htdocs for SLE 12
3.5 Using the Test Environment
3.6 Testing and Filtering Update Repositories with Staging
3.7 Repository Preloading
4 Managing Repositories with YaST SMT Server Management
4.1 Starting SMT Management Module
4.2 Viewing and Managing Repositories
4.3 Staging Repositories
4.4 Jobs and Client Status Monitoring
5 Managing Client Machines with SMT
5.1 Listing Registered Clients
5.2 Deleting Registrations
5.3 Manual Registration of Clients at SUSE Customer Center
5.4 Scheduling Periodic Registrations of Clients at SUSE Customer Center
5.5 Compliance Monitoring
6 SMT Reports
6.1 Report Schedule and Recipients
6.2 Report Output Formats and Targets
7 SMT Tools and Configuration Files
7.1 Important Scripts and Tools
7.2 SMT Configuration Files
7.3 Server Certificates
8 Configuring Clients to Use SMT
8.1 Using Kernel Parameters to Access an SMT Server
8.2 Configuring Clients with AutoYaST Profile
8.3 Configuring Clients with the clientSetup4SMT.sh Script in SLE 11 and 12
8.4 Configuring Clients with YaST
8.5 Registering SLE11 Clients against SMT Test Environment
8.6 Registering SLE12 Clients against SMT Test Environment
8.7 Listing Accessible Repositories
8.8 Online Migration of SUSE Linux Enterprise Clients
8.9 How to Update Red Hat Enterprise Linux with SMT
9 Advanced Topics
9.1 Backup of the SMT Server
9.2 Disconnected SMT Servers
A SMT REST API
B Documentation Updates
B.1 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
B.2 April 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP2)
B.3 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
B.4 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)
B.5 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

Subscription Management Tool (SMT) for SUSE Linux Enterprise 12 SP3 allows enterprise customers to optimize the management of SUSE Linux Enterprise software updates and subscription entitlements. It establishes a proxy system for SUSE® Customer Center with repository (formerly known as catalog) and registration targets. This helps you centrally manage software updates within the firewall on a per-system basis, while maintaining your corporate security policies and regulatory compliance.

SMT allows you to provision updates for all of your devices running a product based on SUSE Linux Enterprise. By downloading these updates once and distributing them throughout the enterprise, you can set more restrictive firewall policies. This also reduces bandwidth usage, as there is no need to download the same updates for each device. SMT is fully supported and available as a download for customers with an active SUSE Linux Enterprise product subscription.

Subscription Management Tool provides functionality that can be useful in many situations, including the following:

  • You want to update both SUSE Linux Enterprise and Red Hat Enterprise Linux servers.

  • You want to get a detailed overview of your company's license compliance.

  • Not all machines in your environment can be connected to SUSE Customer Center to register and retrieve updates for bandwidth or security reasons.

  • There are SUSE Linux Enterprise hosts that are restricted and difficult to update without putting in place a custom update management solution.

  • You need to integrate additional external or internal software update repositories into your update solution.

  • You are looking for a turnkey staging solution for testing updates before releasing them to the clients.

  • You want to have a quick overview of the patch status of your SUSE Linux Enterprise servers and desktops.

Figure 1: SMT

1 Overview

The Subscription Management Tool Guide is divided into the following chapters:

SMT Installation

Introduction to the SMT installation process and the SMT Configuration Wizard. You will learn how to install the SMT add-on on your base system during the installation process or on an already installed base system.

SMT Server Configuration

Description of the YaST configuration module SMT Server. This chapter explains how to set and configure organization credentials, SMT database passwords, and e-mail addresses for sending SMT reports, how to set the SMT job schedule, and how to activate or deactivate the SMT service.

Mirroring Repositories on the SMT Server

Explanation of how to mirror the installation and update sources with YaST.

Managing Repositories with YaST SMT Server Management

Description of the YaST SMT Server Management module, which can be used to view and manage mirrored repositories, stage them, and monitor jobs and client status.

SMT Reports

In-depth look at generated reports based on SMT data. Generated reports contain statistics of all registered machines and products used and of all active, expiring, or missing subscriptions.

SMT Tools and Configuration Files

Description of the most important scripts, configuration files and certificates supplied with SMT.

Configuring Clients to Use SMT

Introduction to configuring any client machine to register against SMT and download software updates from there instead of communicating directly with the SUSE Customer Center.

2 Additional Documentation and Resources

Chapters in this manual contain links to additional documentation resources that are available either on the system or on the Internet.

For an overview of the documentation available for your product and the latest documentation updates, refer to http://www.suse.com/documentation.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

1 SMT Installation

SMT is included in SUSE Linux Enterprise Server starting with version 12 SP1. To install it, start SUSE Linux Enterprise Server installation, and click Software on the Installation Settings screen. Select the Subscription Management Tool pattern on the Software Selection and System Tasks screen, then click OK.

Figure 1.1: SMT Pattern
Tip
Tip: Installing SMT on an Existing System

To install SMT on an existing SUSE Linux Enterprise Server system, run YaST › Software › Software Management, select View › Patterns, and select the SMT pattern there.

It is recommended to check for available SMT updates with the zypper patch command immediately after installing SUSE Linux Enterprise Server. SUSE continuously releases maintenance updates for SMT, and newer packages are likely to be available.

After the system is installed and updated, perform an initial SMT configuration using YaST › Network Services › SMT Configuration Wizard.

Note
Note: Install smt-client

The smt-client package needs to be installed on clients connected to the SMT server. The package requires no configuration, and it can be installed using the sudo zypper in smt-client command.

1.1 SMT Configuration Wizard

The two-step SMT Configuration Wizard helps you configure SMT after SUSE Linux Enterprise Server installation is finished. You can change the configuration later using the YaST SMT Server Configuration module—see Chapter 2, SMT Server Configuration.

  1. The Enable Subscription Management Tool service (SMT) option is enabled by default. Toggle it only if you want to disable the SMT product.

    If the firewall is enabled, enable Open Port in Firewall to allow access to the SMT service from remote computers.

    Enter your SUSE Customer Center organization credentials in User and Password. If you do not know your SUSE Customer Center credentials, refer to Section 3.1, “Mirroring Credentials”. Test the entered credentials using the Test button. SMT will connect to the Customer Center server using the provided credentials and download testing data.

    Enter the e-mail address you used for the SUSE Customer Center registration into SCC E-mail Used for Registration.

    Your SMT Server URL should contain the URL of the SMT server being configured. It is populated automatically.

    Click Next to continue to the second configuration step.

    Figure 1.2: SMT Wizard
  2. For security reasons, SMT requires a separate user to connect to the database. In the Database Password for smt User screen, set the database password for this user.

    Enter all e-mail addresses for receiving SMT reports using the Add button. Use the Edit and Delete buttons to modify and delete the existing addresses. When you have done that, click Next.

  3. If the current database root password is empty, you will be prompted to specify it.

  4. By default, SMT is set to communicate with the client hosts via a secure protocol. For this, the server needs to have a server SSL certificate. The wizard displays a warning if the certificate does not exist. You can create a certificate using the Run CA Management button. Refer to Section 17.2, “YaST Modules for CA Management” for detailed information on managing certificates with YaST.

    Figure 1.3: Missing Server Certificate

1.2 Upgrading from Previous Versions of SMT

This section provides information on upgrading SMT from the previous versions.

Important
Important: Upgrade from Versions Prior to 11 SP3

A direct upgrade path from SMT prior to version 11 SP3 is not supported. You need to do the following:

  1. Upgrade the operating system to SUSE Linux Enterprise Server 11 SP3 or SP4 as described in https://www.suse.com/documentation/sles11/book_sle_deployment/data/cha_update_sle.html

  2. At the same time upgrade SMT to version 11 SP3 as described in https://www.suse.com/documentation/smt11/book_yep/data/smt_installation_upgrade.html.

  3. Follow the steps described in Section 1.2.2, “Upgrade from SMT 11 SP3”.

1.2.1 Upgrade from SMT 12 SP1

Upgrade from SMT 12 SP1 is performed automatically during the SUSE Linux Enterprise Server upgrade and requires no additional manual steps. For more information on SUSE Linux Enterprise Server upgrade, see Chapter 16, Upgrading SUSE Linux Enterprise.

1.2.2 Upgrade from SMT 11 SP3

To upgrade SMT from version 11 SP3 to 12 SP2, follow the steps below.

  1. If you have not already done so, migrate from Novell Customer Center to SUSE Customer Center as described in Section 1.2.2.1, “Migration to SUSE Customer Center on SMT 11 SP3”.

  2. Back up and migrate the database. See the general procedure in Section 16.3.4, “Migrate your MySQL Database”.

  3. Upgrade to SUSE Linux Enterprise Server 12 SP2 as described in Chapter 16, Upgrading SUSE Linux Enterprise.

  4. Check whether a new /etc/my.cnf.rpmnew file exists. If it does, merge any custom changes you need into it, then copy it over the existing /etc/my.cnf:

    cp /etc/my.cnf.rpmnew /etc/my.cnf
  5. Enable the smt target to start at system boot:

    systemctl enable smt.target

    Start it immediately, if necessary:

    systemctl start smt.target

1.2.2.1 Migration to SUSE Customer Center on SMT 11 SP3

Before upgrading to SUSE Linux Enterprise Server 12, you need to switch the registration center on SUSE Linux Enterprise Server 11. SMT now registers with SUSE Customer Center instead of Novell Customer Center. You can do this either with a YaST module or with command line tools.

Before performing the switch between customer centers, make sure that the target customer center serves all products that are registered with SMT. Both YaST and the command line tools perform a check to find out whether all products can be served with the new registration server.

To perform the migration to SUSE Customer Center via command line, use the following command:

smt ncc-scc-migration

The migration takes time, and during the migration process the SMT server may not be able to serve clients that are already registered.

The migration process changes the registration server and the API type in the configuration files. No further configuration changes are needed on the SMT server.

To migrate from Novell Customer Center to SUSE Customer Center via YaST, use the YaST smt-server module.

When the migration is complete, you need to synchronize SMT with the customer center and ensure that the repositories are up to date. This can be done using the following commands:

smt sync
smt mirror

1.3 Enabling SLP Announcements

SMT includes the SLP service description file (/etc/slp.reg.d/smt.reg). To enable SLP announcements of the SMT service, open the respective ports in your firewall and enable the SLP service:

sysconf_addword /etc/sysconfig/SuSEfirewall2 FW_SERVICES_EXT_TCP "427"
sysconf_addword /etc/sysconfig/SuSEfirewall2 FW_SERVICES_EXT_UDP "427"
insserv slpd
rcslpd start

2 SMT Server Configuration

This chapter introduces the YaST configuration module for the SMT server. This module can be used to set and configure mirroring credentials, SMT database passwords, and e-mail addresses for receiving SMT reports. The module also lets you set the SMT job schedule, and activate or deactivate the SMT service.

To configure SMT with SMT Server Configuration, follow the steps below.

  1. Start the YaST module SMT Server Configuration from the YaST control center or by running yast smt-server from the command line.

  2. To activate SMT, toggle the Enable Subscription Management Tool Service (SMT) option in the Customer Center Access section. For more information about activating SMT with YaST, see Section 2.1, “Activating and Deactivating SMT with YaST”.

  3. If the firewall is enabled, activate Open Port in Firewall.

  4. In the Customer Center Configuration section of Customer Center Access, you can set the custom server URLs. Set and test credentials for the SUSE Update service. Correct credentials are necessary to enable mirroring from the download server and determine the products that should be mirrored. Also set the e-mail address used for the registration and the URL of your SMT server. For more information, see Section 2.2, “Setting the Update Server Credentials with YaST”.

  5. In the Database and Reporting section, set the password for the SMT user in the MariaDB database and specify e-mail addresses for receiving reports. For more information, see Section 2.3, “Setting SMT Database Password with YaST” and Section 2.4, “Setting E-mail Addresses to Receive Reports with YaST”.

  6. In the Scheduled SMT Jobs section, set a schedule for SMT jobs, such as synchronization of updates, SUSE Customer Center registration, and SMT report generation. For more information, see Section 2.5, “Setting the SMT Job Schedule with YaST”.

  7. When you are satisfied with the configuration, click OK. YaST updates the SMT configuration and starts or restarts necessary services.

    If you want to abort the configuration and cancel any changes, click Cancel.

    Note
    Note: Check for Certificate

    When the SMT Configuration applies changes, it checks whether the common server certificate exists. If the certificate does not exist, you will be asked whether the certificate should be created.

2.1 Activating and Deactivating SMT with YaST

YaST provides an easy way to activate or deactivate the SMT service. To activate SMT using YaST, follow the steps below.

  1. Switch to the Customer Center Access section in the SMT Configuration.

  2. Activate the Enable Subscription Management Tool service (SMT) option.

    Note
    Note: Organization Credentials

    Specify organization credentials before activating SMT. For more information on how to set organization credentials with YaST, see Section 2.2, “Setting the Update Server Credentials with YaST”.

  3. Click Finish to apply the changes and leave the SMT Configuration.

To deactivate SMT with YaST, proceed as follows.

  1. Switch to the Customer Center Access section in the SMT Configuration.

  2. Disable the Enable Subscription Management Tool service (SMT) option.

  3. Click Finish to apply the changes and leave the SMT Configuration.

When activating SMT, YaST performs the following actions.

  • The Apache configuration is changed by creating symbolic links in the /etc/apache2/conf.d/ directory. Links to the /etc/smt.d/nu_server.conf and /etc/smt.d/smt_mod_perl.conf files are created there.

  • The Apache Web server is started (or reloaded if already running).

  • The MariaDB server is started or restarted. The smt user and all necessary tables in the database are created, if needed.

  • The schema of the SMT database is checked. If the database schema is outdated, the SMT database is upgraded to the current schema.

  • Cron is updated by creating a symbolic link in the /etc/cron.d/ directory. A link to the /etc/smt.d/novell.com-smt file is created there.

When deactivating SMT, YaST performs the following actions.

  • Symbolic links that were created upon SMT activation in the /etc/apache2/conf.d/ and /etc/cron.d/ directories are deleted.

  • The Cron daemon, the Apache server, and the MariaDB daemon are restarted. Neither Apache nor MariaDB is stopped, as they may be used for purposes other than the SMT service.

2.2 Setting the Update Server Credentials with YaST

The following procedure describes how to set and test the download server credentials and the URL of the download server service using YaST.

Figure 2.1: Setting the Update Server Credentials with YaST
  1. Switch to the Customer Center Access section in the SMT Configuration. If the credentials have already been set with YaST or via the /etc/smt.conf configuration file, they will be displayed in the User and Password fields.

  2. If you do not have credentials, visit SUSE Customer Center to obtain them. For more details, see Section 3.1, “Mirroring Credentials”.

  3. Enter your user name and password in the appropriate fields.

  4. Click Test to check the credentials. YaST will try to download a list of available repositories with the provided credentials. If the test succeeds, the last line of the test results will read Test result: success. If the test fails, check the provided credentials and try again.

    Figure 2.2: Successful Test of the Update Server Credentials
  5. Enter the SCC E-mail Used for Registration. This should be the address you used to register with SUSE Customer Center.

    Enter Your SMT Server URL if it has not been detected automatically.

  6. Click OK.

2.3 Setting SMT Database Password with YaST

For security reasons, SMT uses its own user in the database. YaST provides an interface for setting up or changing the SMT database password. To set or change the SMT database password with YaST, follow the steps below.

  1. Switch to the Database and Reporting section in the SMT Configuration module.

  2. Enter the Database Password for SMT User. Confirm the password by re-entering it, then click OK.

2.4 Setting E-mail Addresses to Receive Reports with YaST

YaST SMT provides an interface for setting up a list of e-mail addresses for receiving reports from SMT. To edit this list of addresses, proceed as follows.

  1. Switch to the Database and Reporting section in the SMT Configuration.

  2. The list of e-mail addresses is shown in the table. Use the appropriate buttons to add, edit, and delete existing address entries.

  3. Click OK.

The comma-separated list of addresses for SMT reports is written to the reportEmail option in the /etc/smt.conf configuration file.
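
A sketch of the resulting entry; the [REPORT] section name and the addresses are assumptions for illustration:

[REPORT]
reportEmail = admin@example.com,root@example.com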

2.5 Setting the SMT Job Schedule with YaST

The SMT Configuration module provides an interface to schedule recurring SMT jobs. YaST uses cron to schedule configured jobs. If needed, cron can be used directly (see the sketch after the following list). There are four types of recurring jobs that can be set:

Synchronization of Updates

Synchronizes with SUSE Customer Center, updates repositories, and downloads new updates.

Generation of Reports

Generates and sends SMT Subscription Reports to addresses defined in Section 2.4, “Setting E-mail Addresses to Receive Reports with YaST”.

SCC Registration

Registers with SUSE Customer Center all clients that are not already registered or that changed their data since the last registration.

Job Queue Cleanup

Cleans up queued jobs. It removes finished or failed jobs from the job queue that are older than eight days. It also removes job artifacts that are left in the database as a result of an error.
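
If you prefer to use cron directly as mentioned above, a hypothetical /etc/cron.d entry for the first two job types could look like the following. It assumes the smt-sync and smt-mirror commands are in root's PATH; the schedule itself is only an example:

# synchronize with SUSE Customer Center nightly at 02:00, mirror at 02:30
0 2 * * *   root  smt-sync
30 2 * * *  root  smt-mirror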

Figure 2.3: SMT Job Schedule Configuration

Use the following procedure to configure the schedule of SMT jobs with YaST.

  1. Switch to the Scheduled SMT Jobs section in the SMT Configuration. The table contains a list of all scheduled jobs, their type, frequency, date, and time to run. You can add, delete, and edit the existing scheduled tasks.

  2. To add a scheduled SMT job, click Add. This opens the Adding New SMT Scheduled Job dialog.

    Choose the synchronization job to schedule. You can choose between Synchronization of Updates, Report Generation, SCC Registration, and Job Queue Cleanup.

    Choose the Frequency of the new scheduled SMT job. Jobs can be performed Daily, Weekly, Monthly, or Periodically (every n-th hour or every m-th minute).

    Set the Job Start Time by entering Hour and Minute. In case of a recurring job, enter the relevant intervals. For weekly and monthly schedules, select Day of the Week or Day of the Month.

    Click Add.

  3. To edit a scheduled SMT job (for example, change its frequency, time, or date), select the job in the table and click Edit. Then change the desired parameters and click OK.

    Figure 2.4: Setting Scheduled Job with YaST
  4. To cancel a scheduled job and delete it from the table, select the job in the table and click Delete.

  5. Click OK to apply the settings and quit the SMT Configuration.

3 Mirroring Repositories on the SMT Server

You can mirror the installation and update repositories on the SMT server. This way, you do not need to download updates on each machine, which saves time and bandwidth.

Important
Important: SUSE Linux Enterprise Server 9 Repositories

As SUSE Linux Enterprise Server 9 is no longer supported, SMT does not mirror SUSE Linux Enterprise Server 9 repositories.

3.1 Mirroring Credentials

Before you create a local mirror of the repositories, you need appropriate organization credentials. You can obtain the credentials from SUSE Customer Center.

To get the credentials from SUSE Customer Center, follow these steps:

  1. Visit SUSE Customer Center at http://scc.suse.com and log in.

  2. If you are a member of multiple organizations, choose the organization you want to work with from the drop-down box in the top-right corner.

  3. Click Organization in the top menu.

  4. Switch to the Organizational credentials section.

  5. To see the password, click Show password.

The obtained credentials should be set with the YaST SMT Server Configuration module or added directly to the /etc/smt.conf file. For more information about the /etc/smt.conf file, see Section 7.2.1, “/etc/smt.conf”.

Tip
Tip: Merging Multiple Organization Site Credentials

SMT can only work with one mirror credential at a time. Multiple credentials are not supported. When a customer creates a new company, this generates a new mirror credential. This is not always convenient, as some products are available via the first set and other products via the second set. To request a merge of credentials, the EMEA-based customers (Europe, the Middle East and Africa) are advised to send an e-mail to <> with the applicable customer and site IDs. The EMEA PIC team will verify the records. The contact for NALAAP (North America, Latin America, and Asia Pacific) is <>.

3.2 Managing Software Repositories with SMT Command Line Tools

This section describes tools and procedures for viewing information about software repositories available through SMT, configuring these repositories, and setting up custom repositories on the command line. For details on the YaST SMT Server Management module, see Chapter 4, Managing Repositories with YaST SMT Server Management.

3.2.1 Updating the Local SMT Database

The local SMT database needs to be updated periodically with the information downloaded from SUSE Customer Center. These periodic updates can be configured with the YaST SMT Server Configuration module, as described in Section 2.5, “Setting the SMT Job Schedule with YaST”.

To update the SMT database manually, use the smt-sync command. For more information about the smt-sync command, see Section 7.1.2.7, “smt-sync”.

3.2.2 Enabled Repositories and Repositories That Can Be Mirrored

The database installed with SMT contains information about all software repositories available on SUSE Customer Center. However, the mirror credentials in use determine which repositories can actually be mirrored. For more information about getting and setting organization credentials, see Section 3.1, “Mirroring Credentials”.

Repositories that can be mirrored have the MIRRORABLE flag set in the repositories table in the SMT database. That a repository can be mirrored does not mean that it needs to be mirrored. Only repositories with the DOMIRROR flag set in the SMT database will be mirrored. For more information about configuring which repositories should be mirrored, see Section 3.2.4, “Selecting Repositories to Be Mirrored”.

3.2.3 Getting Information about Repositories

Use the smt-repos command to list available software repositories and additional information. Using this command without any options lists all available repositories, including repositories that cannot be mirrored. In the first column, the enabled repositories (repositories set to be mirrored) are marked with Yes. Disabled repositories are marked with No. The other columns show ID, type, name, target, and description of the listed repositories. The last columns show whether the repository can be mirrored and whether staging is enabled.

Use the --verbose option to get additional information about the URL of the repository and the path it will be mirrored to.

The repository listing can be limited to the repositories that can be mirrored or to the repositories that are enabled. To list the repositories that can be mirrored, use the -m or --only-mirrorable option: smt-repos -m.

To list only enabled repositories, use the -o or --only-enabled option: smt-repos -o (see Example 3.1, “Listing All Enabled Repositories”).

Example 3.1: Listing All Enabled Repositories
tux:~ # smt-repos -o
.---------------------------------------------------------------------------------------------------------------------.
| Mirr| ID | Type | Name                    | Target        | Description                             | Can be M| Stag|
+-----+----+------+-------------------------+---------------+-----------------------------------------+---------+-----+
| Yes |  1 | zypp | ATI-Driver-SLE11-SP2    | --            | ATI-Driver-SLE11-SP2                    | Yes     | Yes |
| Yes |  2 | zypp | nVidia-Driver-SLE11-SP2 | --            | nVidia-Driver-SLE11-SP2                 | Yes     | No  |
| Yes |  3 | nu   | SLED11-SP2-Updates      | sle-11-x86_64 | SLED11-SP2-Updates for sle-11-x86_64    | Yes     | No  |
| Yes |  4 | nu   | SLES11-SP1-Updates      | sle-11-x86_64 | SLES11-SP1-Updates for sle-11-x86_64    | Yes     | Yes |
| Yes |  5 | nu   | SLES11-SP2-Core         | sle-11-x86_64 | SLES11-SP2-Core for sle-11-x86_64       | Yes     | No  |
| Yes |  6 | nu   | SLES11-SP2-Updates      | sle-11-i586   | SLES11-SP2-Updates for sle-11-i586      | Yes     | No  |
| Yes |  7 | nu   | WebYaST-Testing-Updates | sle-11-i586   | WebYaST-Testing-Updates for sle-11-i586 | Yes     | No  |
'-----+----+------+-------------------------+---------------+-----------------------------------------+---------+-----'

You can also list only repositories with a specific name, or show information about a repository with a specific name and target. To list repositories with a particular name, use the smt-repos REPOSITORY_NAME command. To show information about a repository with a specific name and target, use the smt-repos REPOSITORY_NAME TARGET command.
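
For example, using a repository name and target from Example 3.1:

smt-repos SLES11-SP2-Updates sle-11-i586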

To get a list of installation repositories from remote, see Section 8.7, “Listing Accessible Repositories”.

3.2.4 Selecting Repositories to Be Mirrored

Only enabled repositories can be mirrored. In the database, the enabled repositories have the DOMIRROR flag set. Repositories can be enabled or disabled using the smt-repos command.

To enable one or more repositories, follow these steps:

  1. To enable all repositories that can be mirrored or to choose one repository from the list of all repositories, run the smt-repos -e command.

    You can limit the list of repositories by using the relevant options. To limit the list to the repositories that can be mirrored, use the -m option: smt-repos -m -e. To limit the list to the repositories with a specific name, use the smt-repos -e REPOSITORY_NAME command. To enable a repository with a specific name and target, use the smt-repos -e REPOSITORY_NAME TARGET command.

    To enable all repositories belonging to a specific product, use the --enable-by-prod or -p option, followed by the name of the product and optionally version, architecture, and release:

    smt-repos -p product[,version[,architecture[,release]]]

    For example, to enable all repositories belonging to SUSE Linux Enterprise Server 10 SP3 for PowerPC architecture, use the smt-repos -p SUSE-Linux-Enterprise-Server-SP3,10,ppc command. The list of known products can be obtained with the smt-list-products command.

    Tip
    Tip: Installer Self-Update Repository

    SMT supports mirroring the installer self-update repository (find more information in Section 3.4.1, “Self-Update Process”). If you need to provide the self-update repository, identify and enable it, for example:

    $ smt-repos -m | grep Installer
    $ smt-repos -e SLES12-SP2-Installer-Updates sle-12-x86_64
  2. If more than one repository is listed, choose the one you want to enable: specify its ID listed in the repository table and press Enter. If you want to enable all the listed repositories, use a and press Enter.

To disable one or more repositories, follow these steps:

  1. To disable all enabled repositories or just choose one repository from the list of all repositories, run the smt-repos -d command.

    To choose the repository to be disabled from a shorter list, or to disable all repositories from a limited group, use any of the available options to limit the list of repositories. To limit the list to the enabled repositories, use the -o option: smt-repos -o -d. To limit the list to repositories with a particular name, use the smt-repos -d REPOSITORY_NAME command. To disable a repository with a specific name and target, use the smt-repos -d REPOSITORY_NAME TARGET command.

  2. If more than one repository is listed, choose which one you want to disable: specify its ID listed in the repository table and press Enter. If you want to disable all the listed repositories, use a and press Enter.

3.2.5 Deleting Mirrored Repositories

You can delete mirrored repositories that are no longer used. If you delete a repository, it will be physically removed from the SMT storage area.

Use the smt-repos --delete command to delete a repository with a specific name. To delete the repository in a namespace, specify the --namespace DIRNAME option.

The --delete option lists all repositories. You can delete the specified repositories by entering the ID number or the name and target. To delete all repositories, enter a.

Note
Note: Detecting Repository IDs

Every repository has an SHA-1 hash that you can use as an ID. You can get the repository's hash by calling smt-repos -v.

3.2.6 Mirroring Custom Repositories

SMT also makes it possible to mirror repositories that are not available at the SUSE Customer Center. These repositories are called custom repositories, and they can be mirrored using the smt-setup-custom-repos command. It is also possible to delete custom repositories.

When adding a new custom repository, the smt-setup-custom-repos command inserts a new record in the database and sets the mirror flag to true. You can disable mirroring later, if necessary.

To set up a custom repository to be available through SMT, follow these steps:

  1. If you do not know the ID of the product the new repositories should belong to, use smt-list-products to get the ID. For a description of smt-list-products, see Section 7.1.2.4, “smt-list-products”.

  2. Run

    smt-setup-custom-repos --productid PRODUCT_ID \
    --name REPOSITORY_NAME --exturl REPOSITORY_URL

    PRODUCT_ID is the ID of the product the repository belongs to, REPOSITORY_NAME is the name of the repository, and REPOSITORY_URL is the URL of the repository. If the added repository needs to be available for more than one product, specify the IDs of all products that should use the added repository.

    For example, the following command makes My_repository, available at http://example.com/My_repository, available to the products with the IDs 423, 424, and 425:

    smt-setup-custom-repos --productid 423 --productid 424 \
    --productid 425 --name 'My_repository' \
    --exturl 'http://example.com/My_repository'
Note
Note: Mirroring Unsigned Repositories

By default, SUSE Linux Enterprise 10 does not allow the use of unsigned repositories. So if you want to mirror unsigned repositories and use them on client machines, be aware that the package installation tool—YaST or zypper—will ask you whether to use repositories that are not signed.

To remove an existing custom repository from the SMT database, use smt-setup-custom-repos --delete ID, where ID is the ID of the repository to be removed.

3.3 The Structure of /srv/www/htdocs for SLE 11

The path to the directory containing the mirror is set by the MirrorTo option in the /etc/smt.conf configuration file. For more information about /etc/smt.conf, see Section 7.2.1, “/etc/smt.conf”. If the MirrorTo option is not set to the Apache htdocs directory /srv/www/htdocs/, the following links need to be created. If the directories already exist, they need to be removed prior to creating the link (the data in these directories will be lost). In the following examples, MIRRORTO needs to be replaced by the path the option MirrorTo is set to.

  • /srv/www/htdocs/repo/$RCE must point to MIRRORTO/repo/$RCE/

  • /srv/www/htdocs/repo/RPMMD must point to MIRRORTO/repo/RPMMD/

  • /srv/www/htdocs/repo/testing must point to MIRRORTO/repo/testing/

  • /srv/www/htdocs/repo/full must point to MIRRORTO/repo/full/

The directory specified using the MirrorTo option and the subdirectories listed above must exist. Files, directories, and links in /MIRRORTO must belong to the smt user and the www group.

Here is an example where MirrorTo is set to /mirror/data:

ls -laF /srv/www/htdocs/repo/
total 16
lrwxrwxrwx 1 smt  www    22 Feb  9 14:23 $RCE -> /mirror/data/repo/$RCE/
drwxr-xr-x 4 smt  www  4096 Feb  9 14:23 ./
drwxr-xr-x 4 root root 4096 Feb  8 15:44 ../
lrwxrwxrwx 1 smt  www    23 Feb  9 14:23 RPMMD -> /mirror/data/repo/RPMMD/
lrwxrwxrwx 1 smt  www    22 Feb  9 14:23 full -> /mirror/data/repo/full/
drwxr-xr-x 2 smt  www  4096 Feb  8 11:12 keys/
lrwxrwxrwx 1 smt  www    25 Feb  9 14:23 testing -> /mirror/data/repo/testing/
drwxr-xr-x 2 smt  www  4096 Feb  8 14:14 tools/

The links can be created using the ln -s commands. For example:

cd /srv/www/htdocs/repo
for LINK in \$RCE RPMMD full testing; do
 ln -s "/mirror/data/repo/${LINK}/" && chown -h smt:www "${LINK}"
done
Important
Important: The /srv/www/htdocs/repo Directory

The /srv/www/htdocs/repo directory must not be a symbolic link.

Important
Important: Apache and Symbolic Links

By default, Apache on SUSE Linux Enterprise Desktop is configured not to follow symbolic links. To enable symbolic links for /srv/www/htdocs/repo/, add the following snippet to /etc/apache2/default-server.conf (or the respective virtual host configuration in case you are running SMT on a virtual host):

<Directory "/srv/www/htdocs/repo">
 Options FollowSymLinks
</Directory>

After making the change, test the syntax and reload the Apache configuration to activate the change:

rcapache2 configtest && rcapache2 reload

3.4 The Structure of /srv/www/htdocs for SLE 12

The repository structure in the /srv/www/htdocs directory matches the structure specified in SUSE Customer Center. There are the following directories in the structure (selected examples, similar for other products and architectures):

repo/SUSE/Products/SLE-SDK/12/x86_64/product/

Contains the -POOL repository of SDK (the GA version of all packages).

repo/SUSE/Products/SLE-SDK/12/x86_64/product.license/

Contains the EULA associated with the product.

repo/SUSE/Updates/SLE-SDK/12/x86_64/update/
repo/SUSE/Updates/SLE-SDK/12/s390x/update/
repo/SUSE/Updates/SLE-SERVER/12/x86_64/update/

Contain update repositories for respective products.

repo/full/SUSE/Updates/SLE-SERVER/12/x86_64/update/
repo/testing/SUSE/Updates/SLE-SERVER/12/x86_64/update/

Contain repositories created for staging of respective repositories.

3.5 Using the Test Environment

You can mirror repositories to a test environment instead of the production environment. The test environment can be used with a limited number of client machines before the tested repositories are moved to the production environment. The test environment can be run on the main SMT server.

The testing environment uses the same structure as the production environment, but it is located in the /srv/www/htdocs/repo/testing/ subdirectory.

To mirror a repository to the testing environment, you can use the Staging tab in the YaST SMT Management module, or the command smt-staging.

To register a client in the testing environment, modify /etc/SUSEConnect on the client machine as follows:

namespace: testing

To move the testing environment to the production environment, manually copy or move it using the cp -a or mv command.
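
For example, a minimal sketch for an SLE 12-style update repository (the exact path depends on your products; see Section 3.4):

cp -a /srv/www/htdocs/repo/testing/SUSE/Updates/SLE-SERVER/12/x86_64/update \
  /srv/www/htdocs/repo/SUSE/Updates/SLE-SERVER/12/x86_64/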

You can enable staging for a repository in the Repositories tab of the SMT Management module or with the smt-repos command. The mirroring happens automatically to repo/full/.

If you have an SLE11-based update repository with patches, SMT tools can be used to manage them. Using these tools, you can select patches, create a snapshot, and copy it into repo/testing/. After tests are finished, you can copy the contents of repo/testing into the production area repo/.

SLE10-based update repositories are not supported by SMT tools. Not all of these repositories support selective staging. In this case, you must mirror the complete package.

Recommended workflow:

customer center => repo/full => repo/testing => repo/production

3.6 Testing and Filtering Update Repositories with Staging

You can test repositories on any clients using the smt-staging command before moving them to the production environment. You can select new update repositories to be installed on clients.

You can either use the smt-staging command or the YaST SMT Management module for staging. For more details, see Section 4.3, “Staging Repositories”.

SMT Staging Schema
Figure 3.1: SMT Staging Schema

Repositories with staging enabled are mirrored to the /MIRRORTO/repo/full subdirectory. This subdirectory is not used by your clients, so incoming new updates are not automatically visible to them before you get a chance to test them. Later, you can generate a testing snapshot of the repository in /MIRRORTO/repo/testing and, finally, a production snapshot in the /MIRRORTO/repo directory.

If you have an SLE 11-based update repository with patches, you can use SMT tools to manage them. Using these tools, you can select patches and create a snapshot and put it into repo/testing/. After tests are finished, you can put the content of repo/testing into the /repo production area called the default staging group. You can create additional staging groups as needed using the smt-staging creategroup command.

Note
Note: SLE 10-based Update Repositories

SLE 10-based update repositories are not supported by SMT tools. Not all of these repositories support selective staging. In this case, you need to mirror the complete package.

Enabling Staging

To enable or disable staging, use the smt-repos command with the --enable-staging or -s options:

smt-repos --enable-staging

You can enable the required repositories by entering the ID number or by entering the name and target. If you want to enable all repositories, enter a.
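
Alternatively, you can enable staging non-interactively by passing the repository name and target (the values below are illustrative):

smt-repos --enable-staging SLES12-Updates sle-12-x86_64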

Generating Testing and Production Snapshots

To create the testing repository in the default staging group, run the following command:

smt-staging createrepo REPOSITORY_ID --testing

You can then test the installation and functionality of the patches in testing clients. If testing was successful, create the production repository:

smt-staging createrepo REPOSITORY_ID --production

To create testing and production repositories in a named staging group, create the group and the repositories in this group:

smt-staging creategroup GROUPNAME TESTINGDIR PRODUCTIONDIR
smt-staging createrepo --group GROUPNAME REPOSITORY_ID --testing
smt-staging createrepo --group GROUPNAME REPOSITORY_ID --production

This can be useful when you want to combine SLES11-SP1-Updates and SLES11-SP2-Updates of the sle-11-x86_64 architecture into one repository of a group:

smt-staging creategroup SLES11SP1-SP2-Up test-sp1-sp2 prod-sp1-sp2
smt-staging createrepo --group SLES11SP1-SP2-Up \
  SLES11-SP1-Updates sle-11-x86_64 --testing
smt-staging createrepo --group SLES11SP1-SP2-Up \
  SLES11-SP2-Updates sle-11-x86_64 --testing
smt-staging createrepo --group SLES11SP1-SP2-Up \
  SLES11-SP1-Updates sle-11-x86_64 --production
smt-staging createrepo --group SLES11SP1-SP2-Up \
  SLES11-SP2-Updates sle-11-x86_64 --production

Group names can contain the following characters: -, _, a-z, A-Z, and 0-9.

Filtering Patches

You can allow or forbid all or selected patches using the allow or forbid commands:

smt-staging forbid --patch ID
smt-staging forbid --category CATEGORYNAME
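
For example, a sketch of a typical filtering session (repository name, target, and patch ID are illustrative; obtain real patch IDs with the listupdates command):

smt-staging listupdates SLES11-SP2-Updates sle-11-x86_64
smt-staging forbid --patch 8413 SLES11-SP2-Updates sle-11-x86_64
smt-staging createrepo SLES11-SP2-Updates sle-11-x86_64 --testing
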
Signing Changed Repositories

Filtering one or more patches from a repository invalidates the original signature, so the repository needs to be signed again. The smt-staging createrepo command does this automatically, provided the SMT server is configured accordingly.

To enable signing of changed metadata, the admin needs to generate a new signing key. This can be done with GPG like this:

mkdir DIR
gpg --gen-key --homedir DIR
sudo mv DIR /var/lib/smt/.gnupg
sudo chown smt:users -R /var/lib/smt/.gnupg
sudo chmod go-rwx -R /var/lib/smt/.gnupg

The ID of the newly generated key is printed at the end of the gpg --gen-key output; you can also list it later with gpg --homedir /var/lib/smt/.gnupg --list-keys. The ID must be added to the signingKeyID option in the /etc/smt.conf file.
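
For example, assuming the key was moved to /var/lib/smt/.gnupg as shown above (the key ID below is a placeholder):

sudo -u smt gpg --homedir /var/lib/smt/.gnupg --list-keys

Then add the ID to /etc/smt.conf:

signingKeyID = KEY_ID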

At this point, the clients are not aware of the new key. Import the new key to clients during their registration as follows:

sudo -u smt gpg --homedir /var/lib/smt/.gnupg \
  --export -a SIGNING_KEYID \
  > /MIRRORTO/repo/keys/smt-signing-key.key

In this example, MIRRORTO is the base directory where repositories will be mirrored. After that, clients can import this key during the registration process.

Registering Clients in the Testing Environment

To register a client in the testing environment, modify the /etc/SUSEConnect file on the client machine:

namespace: testing

3.7 Repository Preloading

Deploying multiple SMT servers can take time if each new SMT server must mirror the same repositories.

To save time when deploying new SMT servers, the repositories can be preloaded from another server or disk instead. To do this, follow these steps:

  1. Enable the repositories to be mirrored with the SMT, for example:

    smt-repos -e SLES12-Updates sle-12-x86_64
  2. Perform a dry run of smt-mirror to create the required repository directories:

    smt-mirror -d --dryrun -L /var/log/smt/smt-mirror.log

    The following directories are created based on the repository above and the default MirrorTo:

    /srv/www/htdocs/repo/repoindex.xml
    /srv/www/htdocs/repo/$RCE/SLES12-Updates/sle-12-x86_64/*
  3. Then copy the repositories from another SMT server, for example:

    rsync -av 'smt12:/srv/www/htdocs/repo/\$RCE/SLES12-Updates/sle-12-x86_64/' \
     '/srv/www/htdocs/repo/$RCE/SLES12-Updates/sle-12-x86_64/'
  4. To fix the metadata of the preloaded repositories, run the following command:

    smt-mirror -d -L /var/log/smt/smt-mirror.log
Important
Important: Possible Error Messages

Error messages such as “repomd.xml is the same, but repo is not valid. Start mirroring.” are considered normal. They occur because the metadata of the preloaded repositories in the server's database remains incorrect until the initial mirror of the repositories has completed.

4 Managing Repositories with YaST SMT Server Management


The YaST SMT Server Management module is designed for daily management tasks. You can use it to enable or disable the mirroring and staging flags of repositories, and to perform the actual mirroring and staging.

4.1 Starting SMT Management Module

SMT Management is a YaST module. There are two ways to start the module:

  • Start YaST and select Network Services, then SMT Server Management

  • Run the yast2 smt command in the terminal as root

This opens the SMT Management application window and switches to the Repositories section.

List of Repositories
Figure 4.1: List of Repositories

4.2 Viewing and Managing Repositories

In the Repositories section, you can see the list of all available package repositories for SMT. For each repository, the list shows the repository's name, target product and architecture, mirroring and staging flag, date of last mirroring, and a short description. Sort the list by clicking the desired column header, and scroll the list items using the scrollbar on the right side.

4.2.1 Filtering Repositories

You can filter the list of repositories using the Repository Filter text box. Enter the desired filter term and click Filter to see only the matching entries. To cancel the current filter and display all repositories, clear the Repository Filter field and click Filter again.

Repository Filter
Figure 4.2: Repository Filter

4.2.2 Mirroring Repositories

Before you can offer package repositories, you need to create a local mirror of their packages. To do this, follow the procedure below.

  1. From the list, select the line containing the name of the repository you want to mirror.

  2. Click the selected line to highlight it.

  3. Click the Toggle Mirroring button in the lower-left part of the window. This enables the option in the Mirroring column of the selected repository. If the repository was already selected for mirroring, clicking the Toggle Mirroring button disables the mirroring.

  4. Click the Mirror Now button to mirror the repository.

  5. A pop-up window appears with information about the mirroring status and result.

  6. Click OK to refresh the list of repositories.

Status of Mirroring Process
Figure 4.3: Status of Mirroring Process

4.3 Staging Repositories

After the mirroring is finished, you can stage the mirrored repositories. In SMT, staging is a process where you create either testing or production repositories based on the mirrored ones. The testing repository helps you examine the repository and its packages before you make them available in a production environment. To make repositories available for staging, follow the steps below.

  1. From the repository list, select the line containing the name of the repository you want to manage.

  2. Click the selected line to highlight it.

  3. Click the Toggle Staging button next to the Toggle Mirroring button. This enables the option in the Staging column of the selected repository. If the repository was already selected for staging before, clicking the Toggle Staging button disables staging.

  4. Repeat steps 1 to 3 for all repositories you want to stage.

Important
Important: Toggle Staging Button Not Active

You can only stage repositories that were previously selected for mirroring. Otherwise, the Toggle Staging button is disabled.

After you have mirrored the repositories and made them available for staging, click the Staging tab. In the upper-left part of the window, you will find the Repository Name drop-down box containing all repositories available for staging. The repository names include the name of the staging group they are attached to. Select the group you want to stage, and you should see a list of patches in this repository. For each patch, there is information about the patch name, its version and category, testing and production flags, and a short summary.

Next to the Repository Name drop-down box, there is a Patch Category filter. It can be used for listing only the patches that belong to one of the predefined categories.

If the selected repository allows for patch filtering, you can toggle the status flag for individual patches. This is done by clicking the Toggle Patch Status button.

Before creating a repository of packages that are available in the production environment, you need to create and test the testing repository. Select the From Full Mirror to Testing item from the Create Snapshot drop-down list. A small pop-up window appears informing you about the staging process. After the testing repository snapshot has been created, you should see the appropriate options enabled in the Testing column.

Testing Created Snapshot
Figure 4.4: Testing Created Snapshot
Important
Important: Creating a Production Snapshot

After you have enabled staging for an update repository, you need to create its production snapshot to make it available to the clients. Otherwise, the clients cannot find the update repository.

Select the From Testing to Production item from the Create Snapshot drop-down box. A small pop-up window appears informing you about linking the testing repository to the production one. After the production snapshot has been created, you should see the appropriate options enabled in the Production column. Also, a green check mark appears in the Repository Name drop-down box.

4.4 Jobs and Client Status Monitoring

For each client that is registered against the SMT server, SMT creates a job queue. To use the job queue, you need to install the smt-client package on the client. During the installation of the smt-client package, a cron job is created that runs the client executable /usr/sbin/smt-agent every three hours. The agent then asks the server if it has any jobs in the queue belonging to this client and executes these jobs. When there are no more jobs in the queue, the agent terminates completely. It is important to understand that jobs are not pushed directly to the clients when they get created. They are not executed until the client asks for them at the intervals specified in the cron job. Therefore, from the time a job is created on the server until it is executed on the client, a delay of several hours may occur.

Every job can have a parent job. This means that the child job only runs after the parent job has successfully finished. It is also possible to configure advanced timing, recurrence, and persistence for jobs. You can find more details about SMT jobs in Section 7.1.2.3, “smt-job”.

When creating jobs, you need to specify the GUID of the target clients using the -g GUID parameter. Although the -g parameter can be specified multiple times in a single command, you cannot use wild cards to assign a job to all clients.

Currently, the following types of jobs are available:

Execute

Run commands on the client

Eject

Open, close, or toggle the CD tray of the client

Patchstatus

Report the status of installed patches

Reboot

Reboot the client

Softwarepush

Install packages

Update

Install available updates

Tip
Tip: Default Job Types

By default, only softwarepush, patchstatus, and update jobs are allowed. To allow more job types, append the job type to the ALLOWED_AGENTS list in /etc/sysconfig/smt-client.
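
For example, assuming ALLOWED_AGENTS is a space-separated list, enabling reboot and execute jobs in addition to the defaults might look like this in /etc/sysconfig/smt-client:

ALLOWED_AGENTS="patchstatus softwarepush update reboot execute"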

All clients that register against the SMT server automatically get a persistent patchstatus job added to their job queue. This is also the case for clients without the smt-client package (SUSE Linux Enterprise 10 and older, or non-SUSE based distributions). These clients appear with the patch status Unknown in the client lists. The patchstatus jobs for such clients are not required and can safely be deleted to clean up the output of smt-job. Keep in mind that if you update a machine to SUSE Linux Enterprise 11 or later, you need to create its patchstatus job manually.

Whenever the client runs a patchstatus job, it compares the currently installed updates with what is available in the repositories on the SMT server. The job then reports back the number of missing patches that need to be installed in each of the four categories:

  • Security

  • Package Manager

  • Recommended

  • Optional

Tip
Tip: The --agreelicense Option

To install a package and its dependencies, the job type softwarepush is used. When creating this type of job, it is a good idea to use the --agreelicense option. If a package displays a license agreement and expects it to be accepted, the job will skip the package if --agreelicense is not specified. The smt-client command forwards the installation process to zypper, which does not consider a failed acceptance of a license agreement to be an error. This results in the job being completed successfully, even if the package is not installed. Using the --agreelicense option prevents this from happening.
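
For example, a softwarepush job that uses the option might look like the following sketch (package name and GUID are illustrative):

smt-job --create -t softwarepush -P java-1_8_0-openjdk --agreelicense -g 12345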

4.4.1 Checking the Client Status with YaST

The Clients Status section of the SMT Management window provides the status information about all the clients that use the repositories on your SMT server. This information consists of two main parts: the list of the clients and the detailed information.

You can see the client's host name, the date and time of the last network contact with the SMT server, and its update status. The update status can be one of the following:

Up-to-date

The client packages are updated to the latest versions available in the production repository

Updates available

This status means that there are updates available for the client that are either optional or recommended

Critical

Either security patches or package manager patches are available for the client

Detailed information about the selected client is available in the lower part of the window. This usually includes extended status information and detailed information about the number and types of available updates.

Clients Status
Figure 4.5: Clients Status

The date and time in the Last Contact column reflect the last contact of the client with the server, even if the client only ran the regular registration update script. This date is not the date of the last patchstatus report. The smt-client command-line tool prints the correct date and calls it Patch Status Date. The smt-client -v command prints both dates: the patch status date and the last contact of the client system.

Note
Note: Hidden Patches

Some patches may not be visible immediately because they depend on other patches: they are only shown as available after the package manager patch or patches have been installed.

5 Managing Client Machines with SMT


SMT lets you register and manage client machines on SUSE Customer Center. Client machines must be configured to use SMT. For information about configuring clients to use SMT, see Chapter 8, Configuring Clients to Use SMT.

5.1 Listing Registered Clients

To list SMT-registered client machines, use the smt-list-registrations command. The following information is listed for each client: its Unique ID, Hostname, date and time of Last Contact with the SMT server, and the Software Product the client uses.

5.2 Deleting Registrations

To delete a registration from SMT and SUSE Customer Center, use the following command:

smt-delete-registration -g CLIENT_ID

To delete multiple registrations, use the -g option several times.

The ID of the client machine to be deleted can be determined from the output of the smt-list-registrations command.
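
A typical sketch: list the registrations first, then delete the client by its Unique ID (the GUID below is illustrative):

smt-list-registrations
smt-delete-registration -g 52a3bc443f8f46d1a18ef4c4e6a9d4e2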

5.3 Manual Registration of Clients at SUSE Customer Center

The smt-register command registers clients at SUSE Customer Center. This registers all unregistered clients and clients with data that changed since the last registration.

To register clients whose registration has failed, use the --reseterror option. This option resets the SCC registration error flag and tries to submit registrations again.

5.4 Scheduling Periodic Registrations of Clients at SUSE Customer Center

The YaST SMT Configuration module allows for easy scheduling of client registrations. By default, registrations are scheduled to run every 15 minutes. To create a new registration schedule or modify an existing one, follow the steps below.

  1. Start YaST SMT Configuration module (yast2 smt-server).

  2. Go to the Scheduled SMT Job.

  3. Select any SCC Registration job and click Edit to change its schedule.

    To create a new registration schedule, click Add and select SCC Registration as Job to Run.

  4. Choose the Frequency of the scheduled SMT job. You can perform jobs Daily, Weekly, Monthly, or Periodically (every n-th hour or every m-th minute).

    Set the Job Start Time by entering the Hour and Minute or appropriate time periods. For weekly and monthly schedules, select the Day of the Week or the Day of the Month the job should run.

    Note
    Note: Lowest Registration Frequency

    Do not set the frequency lower than 10 minutes, because the maximum value of the rndRegister delay is 450 seconds (7.5 minutes). With a lower frequency, the previously started process may still be sleeping when the next one starts, which causes the second request to exit.

  5. Click OK or Add and Finish.

Scheduling of SMT jobs in general is covered in Section 2.5, “Setting the SMT Job Schedule with YaST”.

YaST uses cron to schedule SUSE Customer Center registrations and other SMT jobs. If you prefer not to use YaST, you can use cron directly.

To disable automatic registration, change the forwardRegistration value in the [LOCAL] section of the /etc/smt.conf configuration file to false.
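
The relevant entry in /etc/smt.conf then reads as follows (a sketch; the exact key/value spacing follows the style of the rest of the file):

[LOCAL]
forwardRegistration = false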

5.5 Compliance Monitoring

To assist customers in monitoring their license compliance, SMT generates a weekly report based on data from SMT and SUSE Customer Center. This report contains statistics about the registered machines, the products used, and the active, expiring, or missing subscriptions. If subscriptions are about to expire, or if more SUSE Linux Enterprise machines are registered than you have purchased licenses for, the report contains relevant warnings.

To calculate the compliance, the smt-report tool by default downloads information about the subscriptions and registrations (this can be disabled).

You can configure the recipient addresses for the reports in the Database and Reporting section of the YaST Subscription Management Tool configuration module. All of the e-mail configuration options are located in the [REPORT] section of /etc/smt.conf and explained in Section 7.2.1.6, “[REPORT] Section of /etc/smt.conf”.

The scheduling of the reports is configured in /etc/cron.d/novell.com-smt, while the parameters to use with the cron jobs are in the REPORT_PARAMS section of /etc/smt.d/smt-cron.conf.

Describing the content of the reports is beyond the scope of this section, but a generated set of reports consists of five individual parts. By default, these parts are attached as individual files to the mail of the weekly report run. The alerts report is a plain text file, while the others are in CSV format. The reports can also be created in PDF or XML format by specifying --pdf or --xml as the output format.

To generate a set of reports as CSV files based on local data and to display them in the standard output, run the following command:

smt-report --local --csv --file /root/smt-local-rep
Tip
Tip: Directory for Reports

The example stores the reports in the /root directory. You can change it to any other writable directory.

The command generates the following files:

/root/smt-local-rep-product_subscription_active.csv
/root/smt-local-rep-product_subscription_alerts.txt
/root/smt-local-rep-product_subscription_expired.csv
/root/smt-local-rep-product_subscription_expiresoon.csv
/root/smt-local-rep-product_subscription_summary.csv
Note
Note: Multiple SMT Servers

If you have multiple SMT servers, the reports may not include all SMT servers or machines in your environment. For the complete statistics of all your registered machines, refer to the information provided by SUSE Customer Center.

For more information about types of reports, output formats, and targets, refer to Chapter 6, SMT Reports.

6 SMT Reports


This chapter explains how to generate reports using the data from SMT and SUSE Customer Center. These reports contain statistics about all registered machines, the products used, and all active, expiring, or missing subscriptions.

Note
Note: Assignment of Reports

If you are using more than one SMT server, generated reports may not include all SMT servers or machines in your environment. For the complete statistics of all your registered machines, refer to the information in the SUSE Customer Center.

6.1 Report Schedule and Recipients

Generated SMT reports can be periodically sent to a list of specified e-mail addresses. To create or edit this list and to set the frequency of the reports, use the YaST SMT Configuration module. How to configure this list is described in Section 2.4, “Setting E-mail Addresses to Receive Reports with YaST”. Configuration of the report schedule is covered in Section 2.5, “Setting the SMT Job Schedule with YaST”.

The list can also be edited manually in the reportEmail part of the /etc/smt.conf configuration file. For more information about manually editing the list of addresses, see Section 7.2.1.6, “[REPORT] Section of /etc/smt.conf”. To set the frequency of reports manually, you can edit the smt-gen-report lines of the crontab in /etc/cron.d/novell.com-smt. For more information about the crontab format, see man 5 crontab.

Reports, including those generated as a scheduled SMT job, are created by the smt-report command. This command supports various parameters. To edit parameters used with scheduled commands, edit the /etc/smt.d/smt-cron.conf configuration file. For more information, see Section 7.2.2, “/etc/smt.d/smt-cron.conf”.

6.2 Report Output Formats and Targets

SMT reports can be printed to the standard output, exported to one or multiple files (in the CSV format), and mailed to a specified list of e-mail addresses. The following parameters can be used with the smt-report command:

--quiet or -q

Suppress output to STDOUT and run smt-report in quiet mode.

--file or -F

Export the report to one or several files. By default, the report is written to a single file, with the results formatted as tables. Optionally, the file name or whole path may be specified after the parameter: --file FILENAME. If no file name is specified, the default file name containing a time stamp is used. However, SMT will not check if the file or files already exist.

In the CSV (Comma-Separated Value) mode, the report is written to multiple files, therefore the specified file name expands to [PATH/]FILENAME-reportname.extension for every report.

--csv or -c

The report is exported to multiple files in the CSV format. The first line of each *.csv file consists of the column names. It is recommended to use the --csv parameter together with the --file parameter. If the specified file name contains a .csv extension, the report format will be CSV (as if the --csv parameter was used).

--mail or -m

Send the report to the addresses configured using the YaST SMT Configuration module and stored in /etc/smt.conf. The report is rendered as tables.

--attach or -a

Attach the report to the mails in the CSV format. This option should only be used in combination with the --mail option.

--pdf

The report is exported to multiple files in the PDF format.

--xml

The report is exported to multiple files in the XML format.

Note
Note: Disabling Sending Attachments

To disable sending CSV attachments with report mails, edit the /etc/smt.d/smt-cron.conf configuration file as follows: remove the --attach option from the REPORT_PARAMS value. The default line reads: REPORT_PARAMS="--mail --attach -L /var/log/smt-report.log". To disable CSV attachments, change it to: REPORT_PARAMS="--mail -L /var/log/smt-report.log".

If you have disabled CSV attachments but need them occasionally, you can send them manually with the smt-report --mail --attach -L /var/log/smt-report.log command.

7 SMT Tools and Configuration Files


This chapter describes the most important scripts, configuration files and certificates shipped with SMT.

7.1 Important Scripts and Tools


There are two important groups of SMT commands: The smt command and its sub-commands are used for managing the mirroring of updates, registration of clients, and reporting. The systemd smt.target is used for starting, stopping, restarting the SMT service and services that SMT depends on, and for checking their status.

7.1.1 SMT JobQueue

Since SUSE Linux Enterprise version 11, there is a new SMT service called SMT JobQueue for delegating jobs to the registered clients.

To enable JobQueue, the smt-client package needs to be installed on the SMT client. The client then pulls jobs from the server via a cron job (every 3 hours by default). The list of jobs is maintained on the server. Jobs are not pushed directly to the clients and processed immediately: instead, the client asks for them. Therefore, a delay of several hours may occur.

Every job can have its parent job, which sets a dependency. The child job only runs after the parent job successfully finished. Job timing is also possible: a job can have a start time and an expiration time to define its earliest execution time or the time the job will expire. A job may also be persistent. It is run repeatedly with a delay. For example, a patch status job is a persistent job that runs once a day. For each client, a patch status job is automatically generated after it registers successfully against an SMT 11 server. The patchstatus information can be queried with the smt-client command. For already registered clients, you can add patchstatus jobs manually with the smt-job command.

You can edit, list, create, and delete the jobs using the smt-job command-line tool. For more details on smt-job, see Section 7.1.2.3, “smt-job”.

Note
Note: Overriding the Automatic Creation of Patch Status Jobs

When creating a software push or an update job, normally a non-persistent patch status job is added automatically. The parent ID is set to the ID of the new job. To disable this behavior, use the --no-autopatchstatus option.

SMT is not intended to be a system to directly access the clients or to immediately report the results back. It is a long-term maintenance and monitoring system rather than a live interaction tool.

Note
Note: Job Time Lag Limitation

The client normally processes one job at a time, reports back the result, and then asks for the next job. If you create a persistent job with a time offset of only a few seconds, it will be repeated forever and will block other jobs. Therefore, adding jobs with a time offset shorter than one minute is not supported.

7.1.2 /usr/sbin/smt Commands

The key command to manage the SMT is smt (/usr/sbin/smt). The smt command should be used together with various sub-commands described in this section. If the smt command is used alone, it prints a list of all available sub-commands. To get help for individual sub-commands, use smt SUBCOMMAND --help.

The following sub-commands are available:

  • smt-client

  • smt-delete-registration

  • smt-job

  • smt-list-products

  • smt-list-registrations

  • smt-mirror

  • smt-scc-sync

  • smt-register

  • smt-report

  • smt-repos

  • smt-setup-custom-repos

  • smt-staging

  • smt-support

  • smt-sync

There are two syntax types you can use with the smt command: smt followed by a sub-command, or a single command consisting of smt, a dash, and the desired sub-command. For example, smt mirror and smt-mirror have the same meaning.

Note
Note: Conflicting Commands

Depending on your $PATH environment variable, the SMT smt command (/usr/sbin/smt) may collide with the smt command from the star package (/usr/bin/smt). Either use the absolute path /usr/sbin/smt, create an alias, or set your $PATH accordingly.

Another solution is to always use the smt-SUBCOMMAND syntax.
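
For example, a sketch of the alias approach for interactive shells:

alias smt='/usr/sbin/smt'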

7.1.2.1 smt-client

The smt-client command shows information about registered clients. The information includes the following:

  • guid

  • host name

  • patch status

  • time stamps of the patch status

  • last contact with the SMT server

The smt-client command supports the following options:

--verbose or -v

Shows detailed information about the client. The last contact date is shown as well.

--debug or -d

Enables debugging mode.

--logfile or -L with the parameter LOGFILE

Specifies the file the log will be written to.

--hostname or -h with the parameter HOSTNAME

Lists the entries whose host name begins with HOSTNAME.

--guid or -g with the parameter ID

Lists the entries whose GUID is ID.

--severity or -s with the parameter LEVEL

Filters the result by the patch status information. The LEVEL value can be one of packagemanager, security, recommended, or optional.
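
For example, to show detailed information about clients missing security patches, or to look up clients by host name (the host name is illustrative):

smt-client -v -s security
smt-client -h smtclient12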

7.1.2.2 smt-delete-registration

The smt-delete-registration command deletes one or more registrations from SMT and SUSE Customer Center. It unregisters machines from the system. The following options are available:

--guid or -g with the parameter ID

Deletes the machine with the guid ID from the system. You can use this option multiple times.

--debug or -d

Enables debugging mode.

7.1.2.3 smt-job

The smt-job script manages jobs for individual SMT clients. You can use this command to list, create, edit, and delete jobs. The following options are available:

--list or -l

Lists all client jobs. This is the default if the operation mode switch is omitted.

--verbose or -v with the parameter LEVEL

Shows detailed information about a job or jobs in list mode. The LEVEL value can be a number from 0 to 3. The higher the value, the more verbose the output.

--create or -c

Creates a new job.

--edit or -e

Edits an existing job.

--delete or -d

Deletes an existing job.

--guid or -g with the parameter ID

Specifies the client's guid. This parameter can be used multiple times to create a job for more than one client.

--jobid or -j with the parameter ID

Specifies the job ID. You need to specify job ID and client's guid when editing or deleting a job, as the same job for multiple clients has the same job ID.

--deleteall or -A

Deletes all matching jobs. Omit either the client's guid or the job ID in the delete operation; the missing parameter matches all respective jobs.

--type or -t with the parameter TYPE

Specifies the job type. The type can be one of patchstatus, softwarepush, update, execute, reboot, wait, eject. On the client, only the following job types are enabled by default: patchstatus, softwarepush and update.

--description DESCRIPTION

Specifies a job description.

--parentID

Specifies the job ID of the parent job. Use it to define a dependency. A job will not be processed until its parent has successfully finished.

--name or -n with the parameter NAME

Specifies a job name.

--persistent

Specifies if a job is persistent. Non-persistent jobs are processed only once, while persistent jobs are processed again and again. Use --timelag to define the time that elapses until the next run.

--finished

Search option for finished jobs.

--targeted TIME

Specifies the earliest execution time of a job. Note that the job will most likely not run exactly at that point in time, but some minutes or hours later, because the client polls for jobs at a fixed interval.

--expires TIME

Defines the time after which the job will no longer be executed.

--timelag TIME

Defines the time interval for persistent jobs.

For a complete list of available options and their explanations, see the manual page of the smt-job command (man smt-job).

7.1.2.3.1 Examples

List all finished jobs:

smt-job --list --finished

Create a softwarepush job that installs xterm and bash on clients 12345 and 67890:

smt-job --create -t softwarepush -P xterm -P bash -g 12345 -g 67890

Change the timing for a persistent job with job ID 42 and guid 12345 to run every 6 hours:

smt-job --edit -j 42 -g 12345 --targeted 0000-00-00 --timelag 06:00:00

Delete all jobs with job ID 42:

smt-job --delete --jobid 42 --deleteall

7.1.2.4 smt-list-products

The smt-list-products script lists all software products in the SMT database. The following options are available:

--used or -u

Shows only used products.

--catstat or -c

Shows whether all repositories needed for a product are locally mirrored.

7.1.2.5 smt-list-registrations

The smt-list-registrations script lists all registrations. There are two options available for this command:

--verbose or -v

Shows detailed information about the registered devices.

--format or -f with the parameter FORMAT

Formats the output as asciitable or csv.

7.1.2.6 smt-mirror

The smt-mirror command performs the mirroring procedure and downloads repositories that are set to be mirrored.

You can run the smt-mirror with the following options:

--clean or -c

Removes all files that are no longer mentioned in the metadata from the mirror. No mirroring occurs; only the cleanup is performed.

--debug or -d

Enables the debugging mode.

--deepverify

Turns on verifying of all package checksums.

--hardlink SIZE

Searches for duplicate files with a size greater than the size specified in kilobytes. Creates hard links for them.

--directory PATH

Defines the directory to work on. When using this option, the default value configured in the smt.conf configuration file is ignored.

--dbreplfile FILE

Defines a path to the *.xml file to use as a database replacement. You can create this file with the smt-scc-sync --createdbreplacementfile command.

--logfile or -L with the parameter FILE

Specifies the path to a log file.
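
For example, the following sketch combines several of these options: it verifies all package checksums, hard-links duplicate files larger than 20000 kB, and logs to a file (the size threshold is illustrative):

smt-mirror --deepverify --hardlink 20000 -L /var/log/smt/smt-mirror.log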

7.1.2.7 smt-sync

The smt-sync or smt sync command obtains data from SUSE Customer Center and updates the local SMT database. It can also save SUSE Customer Center data to a directory instead of the SMT database, or read the data from such a directory instead of downloading it from SUSE Customer Center.

For SUSE Linux Enterprise 11 clients, this script automatically determines whether Novell Customer Center or SUSE Customer Center should be used. Then smt-ncc-sync or smt-scc-sync is called. For SUSE Linux Enterprise 12 clients, only smt-scc-sync is supported.

7.1.2.8 smt-scc-sync

The smt-scc-sync or smt scc-sync command obtains data from SUSE Customer Center and updates the local SMT database. It can also save SUSE Customer Center data to a directory instead of the SMT database, or read SUSE Customer Center data from a directory instead of downloading it from SUSE Customer Center.

You can run the smt-scc-sync with the following options:

--fromdir DIRECTORY

Reads SUSE Customer Center data from a directory instead of downloading it from SUSE Customer Center.

--todir DIRECTORY

Writes SUSE Customer Center data to the specified directory without updating the SMT database.

Tip
Tip: SUSE Manager's Subscription Matching Feature

This data can be used by the subscription matching feature of SUSE Manager, which gives you a detailed overview of your subscription usage. For more information on the subscription matching feature, see https://www.suse.com/documentation/suse-manager-3/book_suma_reference_manual/data/ref_webui_audit_subscription.html

--createdbreplacementfile

Creates a database replacement file for using smt-mirror without database.

--logfile or -L with the parameter LOGFILE

Specifies the path to a log file.

--debug

Enables debugging mode.
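
For example, a sketch for transferring SUSE Customer Center data via a directory (the path is illustrative):

smt-scc-sync --todir /tmp/scc-data -L /var/log/smt/smt-scc-sync.log
smt-scc-sync --fromdir /tmp/scc-data -L /var/log/smt/smt-scc-sync.log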

7.1.2.9 smt-register

The smt-register or smt register command registers all currently unregistered clients at the SUSE Customer Center. It also registers all clients whose data has changed since the last registration.

The following options are available:

--logfile or -L with the parameter LOGFILE

Specifies the path to a log file.

--debug

Enables debugging mode.

7.1.2.10 smt-report

The smt-report or smt report command generates a subscription report based on local calculation or SUSE Customer Center registrations.

The following options are available:

--mail or -m

Activates mailing the report to the addresses configured with the YaST SMT Server module and stored in /etc/smt.conf. The report is formatted as tables.

--attach or -a

Appends the report to the e-mails in CSV format. This option should only be used in combination with the --mail option.

--quiet or -q

Suppresses output to STDOUT and runs smt-report in quiet mode.

--csv or -c

Exports the report to multiple files in the CSV format. The first line of each *.csv file consists of the column names. The --csv parameter should only be used in combination with the --file parameter. If the specified file name has the .csv extension, the report is formatted as CSV (as if the --csv parameter was used).

--pdf or -p

Exports the report in the PDF format. Use it only in combination with the --file option.

--xml

Exports the report in the XML format. Use it only in combination with the --file option. For a detailed description of the XML format, see the manual page of the smt-report command.

--file or -F

Exports the report to one or several files. By default, the report is written to a single file formatted as tables. Optionally, the file name or whole path may be specified after the parameter: --file FILENAME. If no file name is specified, a default file name containing a time stamp is used. However, SMT does not check whether the file or files already exist.

In the CSV mode the report is written to multiple files, therefore the specified file name expands to [PATH/]FILENAME-reportname.extension for every report.

--logfile or -L with the parameter LOGFILE

Specifies the path to a log file.

--debug

Enables debugging mode.

7.1.2.11 smt-repos

Use smt-repos (or smt repositories) to list all available repositories and for enabling, disabling, and deleting repositories. The following options are available:

--enable-mirror or -e

Enables repository mirroring.

--disable-mirror or -d

Disables repository mirroring.

--enable-by-prod or -p

Enables repository mirroring by giving product data in the following format: Product[,Version[,Architecture[,Release]]].

--disable-by-prod or -P

Disables repository mirroring by giving product data in the following format: Product[,Version[,Architecture[,Release]]].

--enable-staging or -s

Enables repository staging.

--disable-staging or -S

Disables repository staging.

--only-mirrorable or -m

Lists only repositories that can be mirrored.

--only-enabled or -o

Lists only enabled repositories.

--delete

Lists repositories and deletes them from disk.

--namespace DIRNAME

Deletes the repository in the specified name space.

--verbose or -v

Shows detailed repository information.
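
For example, to enable mirroring of all repositories belonging to a product, pass the product data in the format named above (the product string below is illustrative):

smt-repos -p SUSE-Linux-Enterprise-Server,12,x86_64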

7.1.2.12 smt-setup-custom-repos

The smt-setup-custom-repos and smt setup-custom-repos commands are designed for setting up custom repositories (repositories not present in the download server) for use with SMT. You can use this command to add a new repository to the SMT database or to delete a repository from the database. The command supports the following options:

--productid PRODUCT_ID

ID of a product the repository belongs to. If a repository should belong to multiple products, use this option multiple times to assign the repository to all relevant products.

--name NAME

The name of the custom repository.

--description DESCRIPTION

The description of the custom repository.

--exturl URL

The URL for the repository to be mirrored from. Only HTTP and HTTPS protocols are supported.

--delete ID

Removes a custom repository with a given ID from the SMT database.

To set up a new repository, use the following command:

smt-setup-custom-repos --productid PRODUCT_ID \
--name NAME --exturl URL

For example:

smt-setup-custom-repos --productid 434 \
--name My_Catalog --exturl http://my.example.com/My_Catalog

To remove a configured repository, use the following command:

smt-setup-custom-repos --delete ID

For example:

smt-setup-custom-repos --delete 1cf336d819e8e5904f4d4b05ee081971a0cc8afc

7.1.2.13 smt-staging

A patch is an update of a package or group of packages. The terms update and patch are often used interchangeably. With the smt-staging script, you can set up patch filters for update repositories. It can also help you generate both testing repositories and repositories for the production environment.

The first argument of smt-staging is always the command. It must be followed by a repository. The repository can be specified by Name and Target from the table scheme returned by the smt-repos command. Alternatively, it can be specified by its Repository ID which can be obtained by running the command smt-repos -v. The smt-staging script supports the following commands:

listupdates

Lists available patches and their allowed and forbidden status.

allow/forbid

Allows or forbids specified patches.

createrepo

Generates both testing and production repository with allowed patches.

status

Gives information about both testing and production snapshots, and patch counts.

listgroups

Lists staging groups.

There is always one group available with the name default. The default group uses the paths repo/full, repo/testing, and repo. New paths can be specified when creating a new group.

creategroup

Creates a staging group. Required parameters are: group name, testing directory name, and production directory name.

removegroup

Removes a staging group. The group name parameter is required.

The following options apply to any smt-staging command:

--logfile or -L with the parameter FILE

Writes log information to the specified file. It is created if it does not already exist.

--debug or -d

Turns on the debugging output and log.

--verbose or -v

Turns more detailed output on.

The following options apply to specific smt-staging commands:

--patch PATCH_ID

Specifies a patch by its ID. You can get a list of available patches with the listupdates command. This option can be used multiple times. Use it with the allow, forbid, and listupdates commands. When used with listupdates, the command prints detailed information about the specified patches.

--category CATEGORY

Specifies the patch category. The following categories are available: security, recommended and optional. Use it in combination with the allow, forbid, and listupdates commands.

--all

Allows or forbids all patches in the allow or forbid commands.

--individually

Allows or forbids multiple patches (for example, by category) one by one, similar to using the --patch option with each of the patches.

--testing

Generates a repository for testing when used in combination with the createrepo command. The repository is generated from the full unfiltered local mirror of the remote repository. It is written into the <MirrorTo>/repo/testing directory, where MirrorTo is the value obtained from smt.conf.

--production

Generates a repository for production when used in combination with the createrepo command. The repository is generated from the testing repository. It is written into the <MirrorTo>/repo directory, where MirrorTo is the value obtained from smt.conf. If the testing repository does not exist, the production repository is generated from the full unfiltered local mirror of the remote repository.

--group GROUP

Specifies on which group the command should work. The default for --group is the name default.

--nohardlink

Copies files instead of creating hard links when creating a repository with the createrepo command. If this option is not specified, hard links are created.

--nodesc

Skips patch descriptions and summaries to save screen space and make the output more readable.

--sort-by-version

Sorts the listupdates table by patch version. The higher the version, the newer the patch should be.

--sort-by-category

Sorts the listupdates table by patch category.

7.1.2.14 smt-support

The smt-support command manages uploaded support data usually coming from the supportconfig tool. You can forward the data to SUSE, either selectively or in full. This command supports the following options:

--incoming or -i with the parameter DIRECTORY

Specifies the directory where the supportconfig archives are uploaded. You can also set this option with the SMT_INCOMING environment variable. The default SMT_INCOMING directory is /var/spool/smt-support.

--list or -l

Lists the uploaded supportconfig archives in the incoming directory.

--remove or -r with the parameter ARCHIVE

Deletes the specified archive.

--empty or -R

Deletes all archives in the incoming directory.

--upload or -u with the parameter ARCHIVE

Uploads the specified archive to SUSE. If you specify -s, -n, -c, -p, and -e options, the archive is repackaged with contact information.

--uploadall or -U

Uploads all archives in the incoming directory to SUSE.

--srnum or -s with the parameter SR_NUMBER

Specifies the 12-digit Novell Service Request number.

--name or -n with the parameter NAME

Specifies the first and last name of the contact, in quotes.

--company or -c with the parameter COMPANY

Specifies the company name.

--storeid or -d with the parameter ID

Specifies the store ID, if applicable.

--terminalid or -t with the parameter ID

Specifies the terminal ID, if applicable.

--phone or -p with the parameter PHONE

Specifies the phone number of the contact person.

--email or -e with the parameter E-MAIL_ADDRESS

Specifies the e-mail address of the contact.
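
For example, a sketch that uploads a single archive repackaged with contact information (archive name, SR number, and contact data are illustrative):

smt-support -u nts_example_180507_1010.tbz -s 101234567890 \
  -n 'Jane Doe' -c 'Example Inc.' -e 'jane.doe@example.com'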

7.1.3 SMT systemd Commands

You can manage SMT-related services with the standard systemd commands:

systemctl start smt.target

Starts the SMT services.

systemctl stop smt.target

Stops the SMT services.

systemctl status smt.target

Checks the status of the SMT services, that is, whether httpd, MariaDB, and cron are running.

systemctl restart smt.target

Restarts the SMT services.

systemctl try-restart smt.target

Checks whether the SMT services are enabled and if so, restarts them.

You can enable and disable SMT with the YaST SMT Server module.

7.2 SMT Configuration Files


The main SMT configuration file is /etc/smt.conf. You can set most of the options with the YaST SMT Server module. Another important configuration file is /etc/smt.d/smt-cron.conf, which contains parameters for commands launched as SMT scheduled jobs.

7.2.1 /etc/smt.conf

The /etc/smt.conf file has several sections. The [NU] section contains the update credentials and URL. The [DB] section contains the configuration of the MariaDB database for SMT. The [LOCAL] section includes other configuration data. The [REPORT] section contains the configuration of SMT reports.

Warning
Warning: Passwords in Clear Text

The /etc/smt.conf file contains passwords in clear text. Its default permissions (640, root, wwwrun) make its content easily accessible with scripts running on the Apache server. Be careful with running other software on the SMT Apache server. The best policy is to use this server only for SMT.

7.2.1.1 [NU] Section of /etc/smt.conf

The following options are available in the [NU] section:

NUUrl

URL of the update service. Usually it should contain the https://updates.suse.com/ URL.

NURegUrl

URL of the update registration service. It is used by smt-sync. If this option is missing, the URL from /etc/SUSEConnect is used as a fallback.

NUUser

NUUser should contain the user name for the update service. For information about getting organization credentials, see Section 3.1, “Mirroring Credentials”. You can set this value with the YaST SMT Server module.

NUPass

NUPass is the password for the user defined in NUUser. For information about getting organization credentials, see Section 3.1, “Mirroring Credentials”. You can set this value with the YaST SMT Server module.

ApiType

ApiType is the type of service SMT uses; it can be either NCC for Novell Customer Center or SCC for SUSE Customer Center. The only supported value for SMT 12 is SCC.

7.2.1.2 [DB] Section of /etc/smt.conf

The three options defined in the [DB] section are used for configuring the database for SMT. Currently, only MariaDB is supported by SMT.

config

The first parameter of the DBI->connect Perl method used for connecting to the MariaDB database. The value should be in the form

dbi:mysql:database=SMT;host=LOCALHOST

where SMT is the name of the database and LOCALHOST is the host name of the database server.

user

The user for the database. The default value is smt.

pass

The password for the database user. You can set the password with the YaST SMT Server module.
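
A complete [DB] section might look like the following sketch (the password is a placeholder; the exact key/value spacing follows the style of the rest of the file):

[DB]
config = dbi:mysql:database=SMT;host=localhost
user = smt
pass = PASSWORD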

7.2.1.3 [LOCAL] Section of /etc/smt.conf

The following options are available in the [LOCAL] section:

url

The base URL of the SMT server which is used to construct URLs of the repositories available on the server. This value should be set by YaST automatically during installation. The format of this option should be: https://server.domain.tld/.

You can change the URL manually. For example, the administrator may choose to use the http:// scheme instead of https:// for performance reasons. Another reason may be using an alias (configured with CNAME in DNS) instead of the host name of the server. For example, http://smt.domain.tld/ instead of http://server1.domain.tld/.

nccEmail

E-mail address used for registration at the SUSE Customer Center. The SMT administrator can set this value with the YaST SMT Server module.

MirrorTo

Determines the path to mirror to.

MirrorAll

If the MirrorAll option is set to true, the smt-sync script will set all repositories that can be mirrored to be mirrored (DOMIRROR flag).

MirrorSRC

If the MirrorSRC option is set to true, source RPM packages are mirrored.

Note
Note: Default Value Changed with SMT 11 SP2

With SMT 11 SP2, the preset default value was changed to false. If you also want SMT to mirror source RPM packages on new installations, set MirrorSRC to true.

Upgraded systems are not affected.

forwardRegistration

For SMT 11, this option determined whether the clients registered at SMT should be registered at Novell Customer Center, too. This option does not work with SUSE Customer Center yet.

rndRegister

Specify a delay in seconds before the clients are registered at SUSE Customer Center. The value is a random number between 0 and 450, generated by the YaST SMT Server module. The purpose of this random delay is to prevent a high load on the SUSE Customer Center server that would occur if all smt-register cron jobs connected at the same time.

mirror_preunlock_hook

Specify the path to the script that will be run before the smt-mirror script removes its lock.

mirror_postunlock_hook

Specify the path to the script that will be run after the smt-mirror script removes its lock.

HTTPProxy

If you do not want to use global proxy settings, specify the proxy to be used for HTTP connections here. Use the following form: http://PROXY.example.com:3128.

If the proxy settings are not configured in /etc/smt.conf, the global proxy settings configured in /etc/sysconfig/proxy are used. You can configure the global proxy settings with the YaST Proxy module.

HTTPSProxy

If you do not want to use global proxy settings, specify the proxy to be used for HTTPS connections here. Use the form: https://PROXY.example.com:3128.

If the proxy settings are not configured in /etc/smt.conf, the global proxy settings configured in /etc/sysconfig/proxy are used. You can configure the global proxy settings with the YaST Proxy module.

ProxyUser

If your proxy requires authentication, specify a user name and password here, using the USERNAME:PASSWORD format.

If the proxy settings are not configured in /etc/smt.conf, the global proxy settings configured in /etc/sysconfig/proxy are used. You can configure the global proxy settings with the YaST Proxy module.

Tip
Tip: Global User Authentication Setting

If you configure the global proxy settings with YaST, manually copy /root/.curlrc to the home directory of the smt user (/var/lib/smt) and adjust its ownership. Run the following commands as root:

cp /root/.curlrc /var/lib/smt/
chown smt:www /var/lib/smt/.curlrc
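
After copying, the file could look similar to the following sketch; curl configuration files take long options without leading dashes, and the proxy host and credentials shown here are examples:

proxy = "http://proxy.example.com:3128"
proxy-user = "exampleuser:examplepassword"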
requiredAuthType

Specify an authentication type to access the repository. There are three possible types:

  • none - no authentication is required. This is the default value.

  • lazy - only user name and password are checked. A valid user can access all repositories.

  • strict - additionally checks whether the user has access to the repository.

smtUser

Specify the user name of the Unix user under which all smt commands run.

signingKeyID

Specify the ID of the GPG key to sign modified repositories. The user specified under smtUser needs to have access to the key. If this option is not set, the modified repositories will be unsigned.
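
To find a suitable key ID, you can list the keys available to the user configured as smtUser. A sketch, assuming smtUser is set to smt:

sudo -u smt gpg --list-secret-keys --keyid-format short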

7.2.1.4 [REST] Section of /etc/smt.conf

The following options are available in the [REST] section:

enableRESTAdminAccess

If set to 1, enables administrative access to the SMT REST service. The default value is 0.

RESTAdminUser

Specify the user name that the REST-Admin uses to log in. Default value is RESTroot.

RESTAdminPassword

Specify the password for the REST-Admin user. The option has no default value. An empty password is invalid.

7.2.1.5 [JOBQUEUE] Section of /etc/smt.conf

The following options are available in the [JOBQUEUE] section:

maxFinishedJobAge

Specify the maximum age of finished non-persistent jobs in days. Default value is 8.

jobStatusIsSuccess

Specify a comma-separated list of JobQueue status IDs that should be interpreted as successful. For more information about possible status IDs, see smt-job --help. Leaving this option empty is interpreted as the default (1,4).

7.2.1.6 [REPORT] Section of /etc/smt.conf

The following options are available in the [REPORT] section:

reportEmail

A comma-separated list of e-mail addresses to send SMT status reports to. You can set this list with the YaST SMT Server module.

reportEmailFrom

From field of report e-mails. If not set, the default root@HOSTNAME.DOMAINNAME will be used.

mailServer

Relay mail server. If empty, e-mails are sent directly.

mailServerPort

Port of the relay mail server set in mailServer.

mailServerUser

User name for authentication to the mail server set in mailServer.

mailServerPassword

Password for authentication to the mail server set in mailServer.

7.2.1.7 Example /etc/smt.conf

Example 7.1: smt.conf
[NU]
NUUrl=https://updates.suse.com/
NURegUrl=https://scc.suse.com/connect
NUUser = exampleuser
NUPass = examplepassword
ApiType = SCC

[DB]
config = dbi:mysql:database=smt;host=localhost
user = smt
pass = smt

[LOCAL]
# Default should be http://server.domain.top/
url = http://smt.example.com/
# This email address is used for registration at SCC
nccEmail = exampleuser@example.com
MirrorTo = /srv/www/htdocs
MirrorAll = false
MirrorSRC = false
forwardRegistration = true
rndRegister = 127
# The hook script that should be called before the smt-mirror script removes its lock
mirror_preunlock_hook =
# The hook script that should be called after the smt-mirror script removed its lock
mirror_postunlock_hook =
# specify proxy settings here, if you do not want to use the global proxy settings
# If you leave these options empty the global options are used.
#
# specify which proxy you want to use for HTTP connection
# in the form http://proxy.example.com:3128
HTTPProxy =
# specify which proxy you want to use for HTTPS connection
# in the form http://proxy.example.com:3128
HTTPSProxy =
# specify username and password if your proxy requires authentication
# in the form username:password
ProxyUser =
#
# require authentication to access the repository?
# Three possible authtypes can be configured here
# 1) none   : no authentication required (default)
# 2) lazy   : check only username and password. A valid user has access to all repositories
# 3) strict : check also if this user has access to the repository.
#
requiredAuthType = none
#
# the smt commands should run with this unix user
#
smtUser = smt
#
# ID of the GPG key to be used to sign modified (filtered) repositories.
# The key must be accessible by the user who runs SMT, i.e. the user specified
# in the 'smtUser' configuration option.
#
# If empty, the modified repositories will be unsigned.
#
signingKeyID =
#
# This string is sent in HTTP requests as UserAgent.
# If the key UserAgent does not exist, a default is used.
# If UserAgent is empty, no UserAgent string is set.
#
#UserAgent=
# Organization credentials for this SMT server.
# These are currently only used to get list of all available repositories
# from https://your.smt.url/repo/repoindex.xml
# Note: if authenticated as a client machine instead of these mirrorUser,
# the above URL returns only repositories relevant for that client.
mirrorUser =
mirrorPassword =

[REST]
# Enable administrative access to the SMT RESTService by setting enableRESTAdminAccess=1
# default: 0
enableRESTAdminAccess = 0
# Define the username the REST-Admin uses for login
# default: RESTroot
RESTAdminUser = RESTroot
# Define the password for the REST-Admin (note: empty password is invalid)
# default: <empty>
RESTAdminPassword =

[JOBQUEUE]
# maximum age of finished (non-persistent) jobs in days
# default: 8
maxFinishedJobAge = 8
# comma separated list of JobQueue status IDs that should be interpreted as successful
# See smt-job --help for more information about possible Status IDs
# Please note: An empty string will be interpreted as default (1,4).
# default: 1,4
# useful:  1,4,6
jobStatusIsSuccess = 1,4

[REPORT]
# comma separated list of eMail addresses where the status reports will be sent to
reportEmail = exampleuser@example.com
# from field of report mails - if empty it defaults to "root@<hostname>.<domainname>"
reportEmailFrom =
# relay mail server - leave empty if mail should be sent directly
mailServer =
mailServerPort =
# mail server authentication - leave empty if not required
mailServerUser =
mailServerPassword =

7.2.2 /etc/smt.d/smt-cron.conf

The /etc/smt.d/smt-cron.conf configuration file contains options of the SMT commands launched as SMT scheduled jobs set with YaST (see Section 2.5, “Setting the SMT Job Schedule with YaST”). Cron is used to launch these scheduled jobs. The cron table is located in the /etc/cron.d/novell.com-smt file.

SCC_SYNC_PARAMS

Contains parameters of the smt scc-sync command, if called as part of an SMT scheduled job via cron. The default value is "-L /var/log/smt/smt-sync.log --mail".

MIRROR_PARAMS

Contains parameters of the smt mirror command, if called as part of an SMT scheduled job via cron. The default value is "-L /var/log/smt/smt-mirror.log --mail".

REGISTER_PARAMS

Contains parameters of the smt register command, if called as part of an SMT scheduled job via cron. The default value is "-r -L /var/log/smt/smt-register.log --mail".

REPORT_PARAMS

Contains parameters of the smt report command, if called as part of an SMT scheduled job via cron. The default value is "--mail --attach -L /var/log/smt/smt-report.log".

JOBQUEUECLEANUP_PARAMS

Contains parameters for smt jobqueue cleanup, if called as a part of an SMT scheduled job via cron. The default value is "--mail -L /var/log/smt/smt-jobqueuecleanup.log".
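
These parameters are appended to the respective smt command when the cron job runs. For example, running the mirror job manually with the default parameters shown above is equivalent to the following command:

smt mirror -L /var/log/smt/smt-mirror.log --mail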

7.3 Server Certificates

For communication between the SMT server and client machines, the encrypted HTTPS protocol is used, requiring a server certificate. If the certificate is not available, or if clients are not configured to use the certificate, the communication between server and clients will fail.

Every client must be able to verify the server certificate by trusting the CA (certificate authority) certificate that signed the server certificate. Therefore, the SMT server provides a copy of the CA certificate at /srv/www/htdocs/smt.crt, which every client can download from the URL http://FQDN/smt.crt. The copy is created by the /usr/lib/SMT/bin/smt-maintenance script. Whenever SMT is started with systemctl start smt.target, it checks the certificate, and if a new CA certificate exists, it is copied again. Therefore, whenever the CA certificate is missing or changed, restart SMT using the systemctl restart smt.target command.
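
For example, to fetch the CA certificate on a client and inspect which CA it belongs to, you can use wget and openssl (the host name is an example):

wget http://smt.example.com/smt.crt
openssl x509 -in smt.crt -noout -subject -fingerprint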

When the SMT Server module applies configuration changes, it checks for the existence of the common server certificate. If the certificate does not exist, YaST asks whether the certificate should be created. If the user confirms, the YaST CA Management module is started.

7.3.1 Certificate Expiration

The common server certificate SMT uses is valid for one year. After that time, a new certificate is needed. Either generate a new certificate using the YaST CA Management module or import a new certificate using the YaST Common Server Certificate module. Both options are described in the following sections.

As long as the same CA certificate is used, there is no need to update certificates on the client machines. The generated CA certificate is valid for 10 years.
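
To check when the certificates expire, you can inspect them with openssl. The following is a sketch; /srv/www/htdocs/smt.crt is the CA copy served to clients, and /etc/ssl/servercerts/servercert.pem is assumed to be the location of the common server certificate:

openssl x509 -in /etc/ssl/servercerts/servercert.pem -noout -enddate
openssl x509 -in /srv/www/htdocs/smt.crt -noout -enddate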

7.3.2 Creating a New Common Server Certificate

To create a new common server certificate with YaST, proceed as follows:

  1. Start YaST and select Security and Users › CA Management. Alternatively, start the YaST CA Management module from a command line by entering yast2 ca_mgm as root.

  2. Select the required CA and click Enter CA.

  3. Enter the password if entering a CA for the first time. YaST displays the CA key information in the Description tab.

  4. Click the Certificates tab (see Figure 7.1, “Certificates of a CA”) and select Add › Add Server Certificate.

    Figure 7.1: Certificates of a CA
  5. Enter the fully qualified domain name of the server as Common Name. Add a valid e-mail address of the server administrator. Other fields, such as Organization, Organizational Unit, Locality, and State are optional. Click Next to proceed.

    Important
    Important: Host Name in Server Certificate

    The server certificate must contain the correct host name. If the client requests server https://some.hostname/, then some.hostname must be part of the certificate. The host name must either be used as the Common Name, see Step 5, or as the Subject Alternative Name, see Step 7: DNS:some.hostname and IP:<ipaddress>.

  6. Enter a Password for the private key of the certificate and re-enter it in the next field to verify it.

  7. If you want to define a Subject Alternative Name, click Advanced Options, select Subject Alternative Name from the list and click Add.

    Important
    Important: Subject Alternative Name

    If a Subject Alternative Name is used in the server certificate, it needs to contain the DNS entry. If a Subject Alternative Name is present, the Common Name (CN) is not checked anymore.

  8. If you want to keep the default values for the other options, like Key Length and Valid Period, click Next. An overview of the certificate to be created is shown.

  9. Click Create to generate the certificate.

  10. To export the new certificate as the common server certificate, select it in the Certificates tab and select Export › Export as Common Server Certificate.

  11. After having created a new certificate, restart SMT using the systemctl restart smt.target command. Restarting SMT ensures that the new certificate is copied from /etc/ssl/certs/YaST-CA.pem to /srv/www/htdocs/smt.crt, the copy SMT uses. Restarting SMT also restarts the Web server.

For detailed information about managing certificates and further usage of the YaST CA Management module and the Common Server Certificate module, refer to the Security Guide. It is available from https://www.suse.com/documentation/sles-12.

7.3.3 Importing a Common Server Certificate

You can import your own common server certificate from a file. The certificate to be imported needs to be in the PKCS12 format with CA chain. Common server certificates can be imported with the YaST Common Server Certificate module.

To import your own certificate with YaST, proceed as follows:

  1. Start YaST and select Security and Users › Common Server Certificate. Alternatively, start the YaST Common Server Certificate module from the command line by entering yast2 common_cert as root.

    The description of the currently used common server certificate is shown in the dialog that opens.

  2. Click Import and select the file containing the certificate to be imported. Specify the certificate password in the Password field.

  3. Click Next. If the certificate is successfully imported, close YaST with Finish.

  4. After having imported a new certificate, restart SMT using the systemctl restart smt.target command. Restarting SMT ensures that the new certificate is copied from /etc/ssl/certs/YaST-CA.pem to /srv/www/htdocs/smt.crt, the copy SMT uses. Restarting SMT also restarts the Web server.

7.3.4 Synchronizing Time between SMT Server and Clients

The synchronization of time between the SMT server and clients is highly recommended. Each server certificate has a validity period. If the client happens to be set to a time outside of this period, the certificate validation on the client side fails.

Therefore, it is advisable to keep the time on the server and clients synchronized. You can easily synchronize the time using NTP (network time protocol). Use yast2 ntp-client to configure an NTP client. Find detailed information about NTP in the Administration Guide.
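
For a quick check whether a machine is synchronized with its time sources, you can query the NTP daemon (assuming the ntp package is installed and the daemon is running):

ntpq -p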

8 Configuring Clients to Use SMT

Any machine running SUSE Linux Enterprise 10 SP4, 11 SP1 or later, or any version of SUSE Linux Enterprise 12 can be configured to register against SMT and download software updates from there, instead of communicating directly with SUSE Customer Center or Novell Customer Center.

If your network includes an SMT server to provide a local update source, you need to equip the client with the server's URL. As client and server communicate via the HTTPS protocol during registration, you also need to make sure the client trusts the server's certificate. In case you set up your SMT server to use the default server certificate, the CA certificate will be available on the SMT server at http://FQDN/smt.crt.

If the certificate is not issued by a well-trusted authority, the registration process will import the certificate from the URL specified with the regcert parameter (SUSE Linux Enterprise Server 10 and 11). For SLE 12, the certificate will be downloaded automatically from SMT. In this case, the client displays the new certificate details (its fingerprint), and you need to accept the certificate.

There are several ways to provide the registration information and to configure the client machine to use SMT:

  1. Provide the required information via kernel parameters at boot time (Section 8.1, “Using Kernel Parameters to Access an SMT Server”).

  2. Configure the clients using an AutoYaST profile (Section 8.2, “Configuring Clients with AutoYaST Profile”).

  3. Use the clientSetup4SMT.sh script (Section 8.3, “Configuring Clients with the clientSetup4SMT.sh Script in SLE 11 and 12”). This script can be run on a client to make it register against a specified SMT server.

  4. In SUSE Linux Enterprise 11 and 12, you can set the SMT server URL with the YaST registration module during installation (Section 8.4, “Configuring Clients with YaST”).

These methods are described in the following sections.

8.1 Using Kernel Parameters to Access an SMT Server

Important
Important: regcert Parameter Support

Note that the regcert kernel boot parameter is supported for SLE 10 and 11. It is no longer supported on SLE 12.

Any client can be configured to use SMT by providing the following kernel parameters during machine boot: regurl and regcert. The former is mandatory, the latter is optional.

Warning
Warning: Beware of Typing Errors

Make sure the values you enter are correct. If regurl has not been specified correctly, the registration of the update source will fail.

If an invalid value for regcert has been entered, you will be prompted for a local path to the certificate. In case regcert is not specified, it will default to http://FQDN/smt.crt with FQDN being the name of the SMT server.

regurl

URL of the SMT server.

For SLE 11 and older clients, the URL needs to be in the following format: https://FQDN/center/regsvc/ with FQDN being the fully qualified host name of the SMT server. It must be identical to the FQDN of the server certificate used on the SMT server. Example:

regurl=https://smt.example.com/center/regsvc/

For SLE 12 clients, the URL needs to be in the following format: https://FQDN/connect/ with FQDN being the fully qualified host name of the SMT server. It must be identical to the FQDN of the server certificate used on the SMT server. Example:

regurl=https://smt.example.com/connect/
regcert

Location of the SMT server's CA certificate. Specify one of the following locations:

URL

Remote location (HTTP, HTTPS, or FTP) from which the certificate can be downloaded. Example:

regcert=http://smt.example.com/smt.crt
Floppy

Specifies a location on a floppy. The floppy needs to be inserted at boot time—you will not be prompted to insert it if it is missing. The value needs to start with the string floppy, followed by the path to the certificate. Example:

regcert=floppy/smt/smt-ca.crt
Local Path

Absolute path to the certificate on the local machine. Example:

regcert=/data/inst/smt/smt-ca.cert
Interactive

Use ask to open a pop-up menu during installation where you can specify the path to the certificate. Do not use this option with AutoYaST. Example:

regcert=ask
Deactivate Certificate Installation

Use done if either the certificate will be installed by an add-on product, or if you are using a certificate issued by an official certificate authority. Example:

regcert=done
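
Combining both parameters, a complete boot parameter line for an SLE 11 client could look as follows (host names are examples):

regurl=https://smt.example.com/center/regsvc/ regcert=http://smt.example.com/smt.crt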
Warning
Warning: Change of SMT Server Certificate

If the SMT server gets a new certificate from an untrusted CA, the clients need to retrieve the new CA certificate file.

On SLE 10 and 11, this is done automatically with the registration process in the following cases:

  • If a URL was used at installation time to retrieve the certificate.

  • If the regcert parameter was omitted and thus the default URL is used.

If the certificate was loaded using any other method, such as floppy or local path, the CA certificate will not be updated.

On SUSE Linux Enterprise Server 12, after the certificate has changed, YaST displays a dialog for importing a new certificate. If you confirm importing the new certificate, the old one is replaced with the new one.

8.2 Configuring Clients with AutoYaST Profile

Clients can be configured to register with the SMT server via an AutoYaST profile. For general information about creating AutoYaST profiles and preparing automatic installation, refer to the AutoYaST Guide. In this section, only SMT-specific configuration is described.

To configure SMT-specific data using AutoYaST, follow the steps for the relevant version of the SMT client.

8.2.1 Configuring SUSE Linux Enterprise 11 Clients

  1. As root, start YaST and select Miscellaneous › Autoinstallation to start the graphical AutoYaST front-end.

    From a command line, you can start the graphical AutoYaST front-end with the yast2 autoyast command.

  2. Open an existing profile using File › Open, create a profile based on the current system's configuration using Tools › Create Reference Profile, or work with an empty profile.

  3. Select Software › Novell Customer Center Configuration. An overview of the current configuration is shown.

  4. Click Configure.

  5. Set the URL of the SMT Server and, optionally, the location of the SMT Certificate. The possible values are the same as for the kernel parameters regurl and regcert (see Section 8.1, “Using Kernel Parameters to Access an SMT Server”). The only exception is that the ask value for regcert does not work in AutoYaST, because it requires user interaction. If you use it, the registration process is skipped.

  6. Perform all other configuration needed for the systems to be deployed.

  7. Select File › Save As and enter a file name for the profile, such as autoinst.xml.

8.2.2 Configuring SUSE Linux Enterprise 12 Clients

  1. As root, start YaST and select Miscellaneous › Autoinstallation to start the graphical AutoYaST front-end.

    From a command line, you can start the graphical AutoYaST front-end with the yast2 autoyast command.

  2. Open an existing profile using File › Open, create a profile based on the current system's configuration using Tools › Create Reference Profile, or work with an empty profile.

  3. Select Software › Product Registration. An overview of the current configuration is shown.

  4. Click Edit.

  5. Check Register the Product, set the URL of the SMT server in Use Specific Server URL Instead of the Default, and optionally set the Optional SSL Server Certificate URL. The possible values for the server URL are the same as for the kernel parameter regurl. For the SSL certificate location, you can use either HTTP or HTTPS based URLs.

  6. Perform all other configuration needed for the systems to be deployed, then click Finish to return to the main screen.

  7. Select File › Save As and enter a file name for the profile, such as autoinst.xml.
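
For reference, the resulting registration section of the profile for an SLE 12 client might look similar to the following sketch. The element names reflect recent AutoYaST versions and should be verified against the AutoYaST Guide for your service pack; host names are examples:

<suse_register>
  <do_registration config:type="boolean">true</do_registration>
  <reg_server>https://smt.example.com</reg_server>
  <reg_server_cert>http://smt.example.com/smt.crt</reg_server_cert>
</suse_register>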

8.3 Configuring Clients with the clientSetup4SMT.sh Script in SLE 11 and 12

In SLE 11 and 12, the /usr/share/doc/packages/smt/clientSetup4SMT.sh script is provided together with SMT. This script allows you to configure a client machine to use an SMT server. It can also be used to reconfigure an existing client to use a different SMT server.

Note
Note: Installation of wget

The script clientSetup4SMT.sh itself uses wget, so wget must be installed on the client.

Important
Important: Upgrade clientSetup4SMT.sh

If you migrated your client OS from an older SUSE Linux Enterprise, check if the version of the clientSetup4SMT.sh script on your host is up to date. clientSetup4SMT.sh from older versions of SMT cannot manage SMT 12 clients. If you apply software patches regularly on your SMT server, you can always find the latest version of clientSetup4SMT.sh at <SMT_HOSTNAME>/repo/tools/clientSetup4SMT.sh.

To configure a client machine to use SMT with the clientSetup4SMT.sh script, follow these steps:

  1. Copy the clientSetup4SMT.sh script from your SMT server to the client machine. The script is available at <SMT_HOSTNAME>/repo/tools/clientSetup4SMT.sh and /srv/www/htdocs/repo/tools/clientSetup4SMT.sh. You can download it with a browser, using wget, or by another means, such as with scp.

  2. As root, execute the script on the client machine. The script can be executed in two ways. In the first case, the script name is followed by the registration URL. For example:

    ./clientSetup4SMT.sh https://smt.example.com/center/regsvc

    In the second case, the script uses the --host option followed by the host name of the SMT server, and --regcert followed by the URL of the SSL certificate; for example:

    ./clientSetup4SMT.sh --host smt.example.com \
      --regcert http://smt.example.com/certs/smt.crt

    In this case, without any namespace specified, the client is configured to use the default production repositories. If --namespace GROUPNAME is specified, the client uses that staging group (see the example after this procedure).

  3. The script downloads the server's CA certificate. Accept it by pressing Y.

  4. The script performs all necessary modifications on the client. However, the registration itself is not performed by the script.

  5. The script downloads additional GPG keys that repositories are signed with and asks you to accept them.

  6. On SLE 11, perform the registration by executing suse_register or running the yast2 inst_suse_register module on the client.

    On SLE 12, perform the registration by executing

    SUSEConnect -p PRODUCT_NAME --url https://smt.example.org

    or running the yast2 registration (SUSE Linux Enterprise Server 12 SP1 and newer) or yast2 scc (SUSE Linux Enterprise Server 12) module on the client.

The clientSetup4SMT.sh script works with SUSE Linux Enterprise 10 SP2 and later Service Packs, SLE 11, and SLE 12 systems.
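
For example, to configure a client against the testing staging group as mentioned in Step 2, you could run:

./clientSetup4SMT.sh --host smt.example.com --namespace testing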

This script is also provided for download. You can get it by running

wget http://smt.example.com/repo/tools/clientSetup4SMT.sh
Important
Important: Extension and Module Registration in SUSE Linux Enterprise 12

When registering an existing system against SMT 12—both on the command line and using YaST—you need to register additional extensions and modules separately, one by one. This applies both to already installed extensions and to extensions that you plan to install.

8.3.1 Problems Downloading GPG Keys from the Server

The apache2-example-pages package includes a robots.txt file. The file is installed into the Apache2 document root directory, and controls how clients can access files from the Web server. If this package is installed on the server, clientSetup4SMT.sh fails to download the keys stored under /repo/keys.

You can solve this problem by either editing robots.txt, or uninstalling the apache2-example-pages package.

If you choose to edit the robots.txt file, add the following line before the Disallow: / statement:

Allow: /repo/keys
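
The edited file could then look similar to this minimal sketch:

User-agent: *
Allow: /repo/keys
Disallow: /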

8.4 Configuring Clients with YaST

8.4.1 Configuring Clients with YaST in SLE 11

To configure a client to perform the registration against an SMT server, use the YaST registration module (yast2 inst_suse_register).

Click Advanced › Local Registration Server and enter the name of the SMT server plus the path to the registration internals (/center/regsvc/), for example:

https://smt.example.com/center/regsvc/

After confirmation, the certificate is loaded and you are asked to accept it. Then continue.

Warning
Warning: Staging Groups Registration

If a staging group is used, make sure that the settings in /etc/suseRegister.conf are adjusted accordingly. If not already done, modify the register= parameter and append &namespace=NAMESPACE. For more information about staging groups, see Section 4.3, “Staging Repositories”.

Alternatively, use the clientSetup4SMT.sh script (see Section 8.3, “Configuring Clients with the clientSetup4SMT.sh Script in SLE 11 and 12”).

8.4.2 Configuring Clients with YaST in SLE 12

To configure a client to perform the registration against an SMT server, use the YaST Product Registration module yast2 registration (SUSE Linux Enterprise Server 12 SP1 or newer) or yast2 scc (SUSE Linux Enterprise Server 12).

On the client, the credentials are not necessary, and you may leave the relevant fields empty. Click Local Registration Server and enter its URL. Then click Next until you exit the module.

8.5 Registering SLE 11 Clients against SMT Test Environment

To configure a client to register against the test environment instead of the production environment, modify /etc/suseRegister.conf on the client machine by setting:

register = command=register&namespace=testing

For more information about using SMT with a test environment, see Section 3.5, “Using the Test Environment”.

8.6 Registering SLE 12 Clients against SMT Test Environment

To configure a client to register against the test environment instead of the production environment, modify /etc/SUSEConnect on the client machine by setting:

namespace: testing

For more information about using SMT with a test environment, see Section 3.5, “Using the Test Environment”.

8.7 Listing Accessible Repositories

To retrieve the accessible repositories for a client, download repo/repoindex.xml from the SMT server with the client's credentials. The credentials are stored in /etc/zypp/credentials.d/SCCcredentials (SUSE Linux Enterprise Server 12) or /etc/zypp/credentials.d/NCCcredentials (SUSE Linux Enterprise Server 11) on the client machine. Using wget, the command for testing could be as follows:

wget https://USER:PASS@smt.example.com/repo/repoindex.xml

repoindex.xml returns the complete repository list as it comes from the vendor. If a repository is marked for staging, repoindex.xml lists the repository in the full namespace (repos/full/$RCE).

To get a list of all repositories available on the SMT server, use the credentials specified in the [LOCAL] section of /etc/smt.conf on the server as mirrorUser and mirrorPassword.
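
As a sketch, the following commands read the client credentials on a SUSE Linux Enterprise Server 12 client and download the repository index; they assume the credentials file consists of username= and password= lines:

USER=$(awk -F= '/^username/ {print $2}' /etc/zypp/credentials.d/SCCcredentials)
PASS=$(awk -F= '/^password/ {print $2}' /etc/zypp/credentials.d/SCCcredentials)
wget "https://$USER:$PASS@smt.example.com/repo/repoindex.xml"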

8.8 Online Migration of SUSE Linux Enterprise Clients

SUSE Linux Enterprise clients registered against SMT can be migrated online to the latest service pack of the same major release the same way as clients registered against SUSE Customer Center or Novell Customer Center. Before starting the migration, make sure that SMT is configured to provide the correct version of repositories to which you need the clients to migrate.

For detailed information on online migration, see https://www.suse.com/documentation/sles11/book_sle_deployment/data/cha_update_sle.html for SUSE Linux Enterprise 11 clients, or Chapter 16, Upgrading SUSE Linux Enterprise for SUSE Linux Enterprise 12 clients.

8.9 How to Update Red Hat Enterprise Linux with SMT

SMT enables customers that possess the required entitlements to mirror updates for Red Hat Enterprise Linux (RHEL). Refer to http://www.suse.com/products/expandedsupport/ for details on SUSE Linux Enterprise Server Subscription with Expanded Support. This section discusses the actions required to configure the SMT server and clients (RHEL servers) for this solution.

Note
Note: SUSE Linux Enterprise Server 10

Configuring a RHEL client with Subscription Management Tool for SUSE Linux Enterprise (SMT 1.0) running on SUSE Linux Enterprise Server 10 is slightly different. For more information, see How to update Red Hat Enterprise Linux with SMT.

8.9.1 How to Prepare SMT Server for Mirroring and Publishing Updates for RHEL

  1. Install SUSE Linux Enterprise Server (SLES) with the SMT packages as per the documentation on the respective products.

  2. During SMT setup, use organization credentials that have access to Novell-provided RHEL update repositories.

  3. Verify that the organization credentials have access to download updates for the Red Hat products with

    smt-repos -m | grep RES
  4. Enable mirroring of the RHEL update repositories for the desired architecture(s):

    smt-repos -e REPO-NAME ARCHITECTURE
  5. Mirror the updates and log verbose output:

    smt-mirror -d -L /var/log/smt/smt-mirror.log

    The updates for RHEL will also be mirrored automatically as part of the default nightly SMT mirroring cron job. When the mirror process of the repositories for your RHEL products has completed, the updates are available via

    http://smt-server.your-domain.top/repo/$RCE/REPOSITORY_NAME/ARCHITECTURE/
  6. To enable GPG checking of the repositories, the key used to sign the repositories needs to be made available to the RHEL clients. This key is now available in the res-signingkeys package, which is included in the SMT 11 installation source.

    • Install the res-signingkeys package with the command

      zypper in -y res-signingkeys
    • The installation of the package stores the key file as /srv/www/htdocs/repo/keys/res-signingkeys.key.

    • Now the key is available to the clients and can be imported into their RPM database as described later.

8.9.2 How to Configure the YUM Client on RHEL 5.2 to Receive Updates from SMT

  1. Import the repository signing key downloaded above into the local RPM database with

    rpm --import http://smt.example.com/repo/keys/res-signingkeys.key
  2. Create a file in /etc/yum.repos.d/ and name it RES5.repo.

  3. Edit the file and enter the repository data, and point to the repository on the SMT server as follows:

    [smt]
    name=SMT repository
    baseurl=http://smt.example.com/repo/$RCE/REPOSITORY_NAME/ARCHITECTURE/
    enabled=1
    gpgcheck=1

    Example of base URL:

    http://smt.mycompany.com/repo/$RCE/RES5/i386/
  4. Save the file.

  5. Disable standard Red Hat repositories by setting

    enabled=0

    in the repository entries in other files in /etc/yum.repos.d/ (if any are enabled).

    Both YUM and the update notification applet should work correctly now and notify of available updates when applicable. You may need to restart the applet.

8.9.3 How to Configure the UP2DATE Client on RHEL 3.9 and 4.7 to Receive Updates from SMT

  1. Import the repository signing key downloaded above into the local RPM database with

    rpm --import http://smt.example.com/repo/keys/res-signingkeys.key
  2. Edit the file /etc/sysconfig/rhn/sources and make the following changes:

  3. Comment out any lines starting with up2date.

    Normally, there will be a line that says "up2date default".

  4. Add an entry pointing to the SMT repository (all in one line):

    yum REPO_NAME http://smt.example.com/repo/$RCE/REPOSITORY_NAME/ARCHITECTURE/

    where REPOSITORY_NAME should be set to RES3 for RHEL 3.9 and RES4 for RHEL 4.7.

  5. Save the file.

Both up2date and the update notification applet should work correctly now, pointing to the SMT repository and indicating updates when available. In case of trouble, try to restart the applet.

To ensure correct reporting of the Red Hat Enterprise systems in SUSE Customer Center, they need to be registered against your SMT server. For this, a special suseRegisterRES package is provided through the RES* repositories. Install, configure, and execute it as described below.

8.9.4 How to Register RHEL 5.2 against SMT

  1. Install the suseRegisterRES package.

    yum install suseRegisterRES
    Note
    Note: Additional Packages

    You may need to install the perl-Crypt-SSLeay and perl-XML-Parser packages from the original RHEL media.

  2. Copy the SMT certificate to the system:

    wget http://smt.example.com/smt.crt
    cat smt.crt >> /etc/pki/tls/cert.pem
  3. Edit /etc/suseRegister.conf to point to SMT by changing the URL value to

    url = https://smt.example.com/center/regsvc/

    or (for SUSE Customer Center)

    url = https://smt.example.com/connect/
  4. Register the system:

    suse_register

8.9.5 How to Register RHEL 4.7 and RHEL 3.9 against SMT

  1. Install the suseRegisterRES package:

    up2date --get suseRegisterRES
    up2date --get perl-XML-Writer
    rpm -ivh /var/spool/up2date/suseRegisterRES*.rpm /var/spool/up2date/perl-XML-Writer-0*.rpm
    Note
    Note: Additional Packages

    You may need to install the perl-Crypt-SSLeay and perl-XML-Parser packages from the original RHEL media.

  2. Copy the SMT certificate to the system:

    wget http://smt.example.com/smt.crt
    cat smt.crt >> /usr/share/ssl/cert.pem
  3. Edit /etc/suseRegister.conf to point to SMT by changing the URL value to

    url = https://smt.example.com/center/regsvc/

    or (for SUSE Customer Center)

    url = https://smt.example.com/connect/
  4. Register the system:

    suse_register

9 Advanced Topics

This chapter covers usage scenarios beyond the regular workflow to give you more control over your SMT server.

9.1 Backup of the SMT Server

Regularly creating backups of the SMT server helps you restore it quickly and reliably if the server fails.

There are three main parts on the SMT server to back up:

  • Configuration files

  • Package repositories

  • The database

9.1.1 Configuration Files and Repositories

The SMT server configuration is stored in the /etc/smt.conf file and files in the /etc/smt.d directory.

As SMT depends on the services provided by the Apache Web server and the MariaDB database engine, you need to back up their configuration files as well. Apache configuration files are located in the /etc/apache2 directory, while the configuration of MariaDB is stored in /etc/my.cnf, /etc/mysqlaccess.conf, and files in the /etc/my.cnf.d directory.

Package repositories are stored in the /srv/www/htdocs/repo directory. While you can normally mirror the repositories on the restored server from the update server as well, the download can take a long time. Therefore, backing up the repositories can save you time and bandwidth. Moreover, backing up the repositories is necessary if you are using repository staging and want to restore the snapshots of the repositories (see Section 3.6, “Testing and Filtering Update Repositories with Staging”).

Warning
Warning: Size of the Repositories

The software repositories can be very large. If they are not included in the backup, you will need to transfer them from the update server again after restoring the server.

Use your preferred tool to back up the configuration and repository files.

9.1.2 The Database

SMT uses the MariaDB database to store information about clients, registrations, machine data, which repositories are enabled for mirroring, and custom repositories. Unlike the configuration files and repositories, the database information cannot be recovered without a valid backup.

To back up the SMT database, you can, for example, create a cron job that performs an SQL dump into a plain text file:

mysqldump -u root -pSMT_DB_PASSWORD smt > /BACKUP_DIR/smt-db-backup.sql

Then add the resulting file to your regular backup jobs.
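
A matching cron entry and restore command could look as follows; the 02:30 schedule, the paths, and the placeholder password are examples, and the dump is restored into an existing smt database:

# /etc/cron.d/smt-db-backup: dump the SMT database every night at 02:30
30 2 * * * root mysqldump -u root -pSMT_DB_PASSWORD smt > /BACKUP_DIR/smt-db-backup.sql

To restore the dump later, run:

mysql -u root -p smt < /BACKUP_DIR/smt-db-backup.sql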

9.2 Disconnected SMT Servers

In some restricted environments it is not possible for SMT servers to access the Internet because they are located on disconnected or isolated networks. In this case, you can back up the relevant data on an external storage device using special parameters with the SMT commands.

You need an external SMT server that mirrors the repositories from SUSE Customer Center. Then you can transfer these repositories to the SMT servers on the isolated network using the external storage device.

Figure 9.1: SMT Disconnected Setup

Although the initial setup of this solution requires additional configuration, the regular update synchronization with SUSE Customer Center and distribution to isolated servers is simple. The steps required during the initial setup are as follows:

  • Installing and configuring the external SMT server

  • Installing the internal server

  • Editing /etc/smt.conf and setting up a cron job on the internal SMT server

  • Transferring the SUSE Customer Center data from the external SMT server to the internal server

  • Enabling and disabling repositories on the internal server

  • Creating an SMT database replacement file on the internal server (when performing the mirror jobs, the file can be used instead of the normal MariaDB database)

Day-to-day operation requires the following actions:

  • Running the smt-mirror job on the external server

  • Synchronizing the mirrored repositories from the external storage device to the internal SMT server

Below is a detailed description of the individual steps.

Procedure 9.1: External SMT Server Configuration for the Disconnected Setup
  1. Install and configure SMT as described in Chapter 1, SMT Installation.

  2. Enable the repositories for use by the internal SMT servers.

  3. Perform a standard repository mirroring from SUSE Customer Center with smt-mirror.

  4. Attach a removable storage device to the server and mount it.

  5. Export the required SUSE Customer Center data to a directory on the mounted storage device:

    1. Create a directory with correct permissions for storing the data. Because the smt commands run as the smt user (whose numeric UID can differ between the servers), you need to make permissions for the directories on the external storage device less restrictive:

      chmod o+w /path/to/scc/dir/on/storage/device
    2. Export the SUSE Customer Center data:

      smt-sync --todir /path/to/scc/dir/on/storage/device
  6. Create a directory with correct permissions:

    mkdir /path/to/repository/on/storage/device
    chmod o+w /path/to/repository/on/storage/device
  7. Unmount and detach the storage device.

Procedure 9.2: Internal SMT Server Configuration for the Disconnected Setup
  1. Ensure you have a working SUSE Linux Enterprise Server installation source.

  2. Install SMT the same way as on the external server with the following exceptions:

    1. Select Generate new SCC credentials in the SCC Credentials dialog.

    2. Ignore the error message when running the synchronization script in the Writing SMT Configuration phase of the wizard.

    3. Abort the SUSE Customer Center Configuration wizard and click OK in the list of installed add-on products.

  3. Re-launch the YaST Subscription Management Tool Server Configuration module (yast2 smt-server) and go to the Scheduled SMT Jobs tab.

  4. Delete SCC Registration and Synchronization of Updates jobs.

  5. Click OK to finish the wizard, provide the SMT user password, and acknowledge the synchronization error again.

  6. Prevent registration data upstream synchronization to SUSE Customer Center by setting

    forwardRegistration = false

    in /etc/smt.conf.

  7. Connect an external storage device and mount it.

  8. Populate the SMT database with the previously created SUSE Customer Center data:

    smt-sync --fromdir /path/to/scc/dir/on/storage/device
  9. Enable mirroring of the desired repositories using the smt-repos -e command.

  10. Create a database replacement file on the external storage device:

    smt-sync --createdbreplacementfile /path/to/dbrepl/file/on/storage/device
  11. Unmount and detach the storage device.

Now the configuration of both the external and internal SMT servers is complete. However, the update repository is still empty. After you run the following daily operation routines for the first time, the repository will be synchronized, and the internal SMT server will be ready to serve clients.

Procedure 9.3: Daily External SMT Server Operation
  1. Connect an external storage device and mount it.

  2. Perform a mirror to a directory on the storage device based on the file stored on it:

    smt-mirror --dbreplfile /path/to/dbrepl/file/on/storage/device \
     --fromlocalsmt --directory /path/to/repository/on/storage/device \
     -L /var/log/smt/smt-mirror-example.log
  3. Update the database on the storage device with the product and subscription info from SUSE Customer Center:

    smt-sync --todir /path/to/scc/dir/on/storage/device
  4. Optionally, scan the storage device for viruses and other unwanted content.

  5. Unmount and disconnect the storage device.

Procedure 9.4: Daily Internal SMT Server Operation
  1. Connect a storage device and mount it.

  2. Update the SUSE Customer Center data on the server:

    smt-sync --fromdir /path/to/scc/dir/on/storage/device
  3. Mirror from the storage device to the server:

    smt-mirror --fromdir /path/to/repository/on/storage/device
  4. Update the SUSE Customer Center data on the storage device with local changes in the mirror status since the last synchronization:

    smt-sync --createdbreplacementfile /path/to/dbrepl/file/on/storage/device
  5. Unmount and disconnect the storage device.

A SMT REST API

The SMT REST interface is meant for communication with SMT clients and integration into other Web services. The base URI for all the following REST calls is https://YOURSMTSERVER/=/1. The SMT server responds with XML data described for each call by an RNC snippet with comments.

Quick Reference
Note
Note: API for authenticating SMT clients.

Used internally in the smt-client package. Not intended for general administrative use!

GET /jobs                            get list of all jobs for client
GET /job/@next                       get the next job for client
GET /job/<jobid>                     get job with jobid for client.
                                     Note: this marks the job as retrieved
PUT /job/<jobid>                     update job having <jobid> using XML data.
                                     Note: updates only retrieved jobs

For backward compatibility reasons, the following are also available:

GET /jobs/@next                      same as GET /job/@next
GET /jobs/<jobid>                    same as GET /job/<jobid>
PUT /jobs/<jobid>                    same as PUT /job/<jobid>

API for general access (this needs authentication using credentials from the [REST] section of smt.conf).

GET /client                          get data of all clients
GET /client/<GUID>                   get data of client with specified GUID
GET /client/<GUID>/jobs              get client's job data
GET /client/<GUID>/patchstatus       get client's patch status
GET /client/<GUID>/job/@next         get client's next job
GET /client/<GUID>/job/<jobid>       get specified client job data
GET /client/@all/jobs                get job data of all clients
GET /client/@all/patchstatus         get patch status of all clients
GET /repo                            get all repositories known to SMT
GET /repo/<repoid>                   get details of repository with <repoid>
GET /repo/<repoid>/patches           get repository's patches
GET /patch/<patchid>                 get patch <patchid> details
GET /product                         get list of all products known to SMT
GET /product/<productid>             get details of product with <productid>
GET /product/<productid>/repos       get list of product's repositories

For backward compatibility reasons, plural forms are also available; for example:

GET /clients                         same as GET /client
GET /repos                           same as GET /repo
GET /products                        same as GET /product
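
For example, a general-access call with curl could look as follows; the credentials are the RESTAdminUser and RESTAdminPassword values from the [REST] section of smt.conf, and enableRESTAdminAccess must be set to 1:

curl -u RESTroot:PASSWORD "https://smt.example.com/=/1/repos"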
Detailed Description

API for authenticating clients:

GET /jobs

Get list of all jobs for an authenticating client. When the jobs are retrieved via this path, they are not set to the retrieved status.

Example:

<jobs>
  <job name="Patchstatus Job" created="2010-06-18 16:34:38" description="Patchstatus Job for Client 456" exitcode="" expires="" finished="" guid="456" guid_id="30" id="31" message="" parent_id="" persistent="1" retrieved="" status="0" stderr="" stdout="" targeted="" timelag="23:00:00" type="1" verbose="0">
    <arguments></arguments>
  </job>
  <job name="Software Push" created="2010-06-18 16:37:59" description="Software Push: mmv, whois" exitcode="" expires="" finished="" guid="456" guid_id="30" id="32" message="" parent_id="" persistent="0" retrieved="" status="0" stderr="" stdout="" targeted="" timelag="" type="2" verbose="0">
    <arguments>
      <packages>
        <package>mmv</package>
        <package>whois</package>
      </packages>
    </arguments>
  </job>
  <job name="Update Job" created="2010-06-18 16:38:39" description="Update Job" exitcode="" expires="" finished="" guid="456" guid_id="30" id="34" message="" parent_id="" persistent="0" retrieved="" status="0" stderr="" stdout="" targeted="" timelag="" type="3" verbose="0">
    <arguments></arguments>
  </job>
  <job name="Execute" created="2010-06-18 17:40:10" description="Execute custom command" exitcode="0" expires="" finished="2010-06-18 17:40:14" guid="456" guid_id="30" id="41" message="execute successfully finished" parent_id="" persistent="0" retrieved="2010-06-18 17:40:14" status="1" stderr="man:x:13:62:Manual pages viewer:/var/cache/man:/bin/bash" stdout="" targeted="" timelag="" type="4" verbose="1">
   <arguments command="grep man /etc/passwd" />
  </job>
  <job name="Reboot" created="2010-06-18 16:40:28" description="Reboot now" exitcode="" expires="2011-06-12 15:15:15" finished="" guid="456" guid_id="30" id="37" message="" parent_id="" persistent="0" retrieved="" status="0" stderr="" stdout="" targeted="2010-06-12 15:15:15" timelag="" type="5" verbose="0">
    <arguments></arguments>
  </job>
  <job name="Wait 5 sec. for exit 0." created="2010-06-18 16:40:59" description="Wait for 5 seconds and return with value 0." exitcode="" expires="" finished="" guid="456" guid_id="30" id="38" message="" parent_id="" persistent="0" retrieved="" status="0" stderr="" stdout="" targeted="" timelag="" type="7" verbose="0">
    <arguments exitcode="0" waittime="5" />
  </job>
  <job name="Eject job" created="2010-06-18 16:42:00" description="Job to eject the CD/DVD drawer" exitcode="" expires="" finished="" guid="456" guid_id="30" id="39" message="" parent_id="" persistent="0" retrieved="" status="0" stderr="" stdout="" targeted="" timelag="" type="8" verbose="0">
    <arguments action="toggle" />
  </job>
</jobs>
GET /jobs/@next

Get the next job for an authenticating client. The job will not be set to the retrieved status.

Example:

<job id="31" guid="456" type="patchstatus" verbose="false">
  <arguments></arguments>
</job>
GET /jobs/<jobid>

Get a job with the specified jobid for an authenticating client. The job will be set to the retrieved status.

When the client retrieves a job, not all the metadata needs to be part of the XML response. The response may contain the full set of metadata, because smt-client only picks the data that is relevant. However, a retrieved job should only contain the minimal set of data required to fulfill it.

RNC:

start = element job {
  attribute id {xsd:integer},         # the job ID. A job id alone is not unique.
                                      # A job is only uniquely identified with
                                      # guid and id. The same jobs for multiple
                                      # clients have the same job id.
  attribute parent_id {xsd:integer}?, # ID of the job on which this job depends
  attribute guid {xsd:string},
  attribute guid_id {xsd:integer}?,   # internal database ID of the client
                                      # (for compatibility reasons, if third
                                      # party application talks to SMT REST
                                      # service).
  attribute type {                    # job type ID string. Must be unique and
                                      # equal to the name of the Perl module on
                                      # the client.
    "softwarepush",
    "patchstatus",
    "<custom>"                        # add your own job types
  },
  attribute name {xsd:string},        # short custom name of the job, user-defined
  attribute description {xsd:string}, # custom description of what the job does
  attribute created {xsd:string},     # time stamp of creation
  attribute expires {xsd:string},     # expiration time stamp; the job expires
                                      # if not retrieved by then
  attribute finished {xsd:string},    # time stamp of job completion
  attribute retrieved {xsd:string},   # time stamp of retrieval of the job
  attribute persistent {xsd:boolean}?, # defines whether the job is a persistent
                                      # (repetitive) job
  attribute verbose {xsd:boolean},    # if true, output of job commands is
                                      # attached to the result
  attribute exitcode {xsd:integer},   # the last exit code of the system command
                                      # executed to complete the job
  attribute message {xsd:string},     # custom human-readable message the client
                                      # sends back as a result
  attribute status {                  # logical status of the job
    0,     # not yet worked on: The job may be already retrieved but no
           # result was sent back yet.
    1,     # success: The job was retrieved, processed and the client sent
           # back a success response.
    2,     # failed: The job was retrieved, processed and the client sent
           # back a failure response.
    3},    # denied by client: The job was retrieved but could not be
           # processed as the client denied to process this job type
           # (a client needs to allow all job types that should be processed,
           # any other will be denied).
  attribute stderr {text},            # standard error output of job's system
                                      # commands (filled if verbose)
  attribute stdout {text},            # standard output of job's system
                                      # commands (filled if verbose)
  attribute targeted {xsd:string},    # time stamp when this job will be
                                      # delivered at the earliest
  attribute timelag {xsd:string}?,    # interval time of a persistent job in
                                      # the format "HH:MM:SS" (HH can be
                                      # bigger than 23)
  element-arguments                   # job-type-specific XML data
}

Example (minimal job definition for a 'softwarepush' job):

<job id="32" guid="456" type="softwarepush" verbose="false">
  <arguments>
    <packages>
      <package>mmv</package>
      <package>whois</package>
    </packages>
  </arguments>
</job>
PUT /job/<jobid>

Update a job for an authenticating client using XML data.

A client can only send job results for jobs properly retrieved previously. The jobs will be set to status done (except for persistent jobs, in which case a new target time will be computed).

Examples:

  • Example for a successful patchstatus job:

    <job id="31" guid="abc123" exitcode="0" message="0:0:0:0 # PackageManager=0 Security=0 Recommended=0 Optional=0" status="1" stderr="" stdout="" />
  • Example for a failed softwarepush:

    <job id="32" guid="abc123" exitcode="104" message="softwarepush failed" status="2" stderr="" stdout="" />
  • Example for a successful update:

    <job id="34" guid="abc123" exitcode="0" message="update successfully finished" status="1" stderr="" stdout="" />
  • Example for a successful reboot job:

    <job id="37" guid="abc123" exitcode="0" message="reboot triggered" status="1" stderr="" stdout="" />
  • Example for a successful wait job:

    <job id="38" guid="abc123" exitcode="0" message="wait successfully finished" status="1" stderr="" stdout="" />
  • Example for a successful eject job:

    <job id="39" guid="abc123" exitcode="0" message="eject successfully finished" status="1" stderr="" stdout="" />
  • Example for a successful execute job:

    <job id="41" guid="abc123" exitcode="0" message="execute successfully finished" status="1" stderr="man:x:13:62:Manual pages viewer:/var/cache/man:/bin/bash" stdout="" />

API for general access:

GET /repo/<repoid>

Returns detailed information about the specified repository. The <repoid> can be obtained using the /repos or /products/<productid>/repos/ call.

RNC:

start = element repo {                     # repository
  attribute id {xsd:integer},              # SMT ID of the repository
  attribute name {xsd:string},             # repository's Unix name
  attribute target {xsd:string},           # repository's target product
  attribute type {"nu" | "yum" | "zypp" | "pum"}, # type of repository
  element description {xsd:string},        # description of the repository
  element localpath {xsd:string},          # path to local SMT mirror of the
                                           # repository
  element url {xsd:anyURI},                # original URL of the repository
  element mirrored {
    attribute date {xsd:integer}           # timestamp of the last successful
                                           # mirror (empty if not mirrored yet)
  }
}

Example:

<repo name="SLES10-SP2-Updates" id="226" target="sles-10-i586" type="nu">
  <description>SLES10-SP2-Updates for sles-10-i586</description>
  <localpath>/local/htdocs/repo/$RCE/SLES10-SP2-Updates/sles-10-i586</localpath>
  <mirrored date="1283523440"/>
  <url>https://nu.novell.com/repo/$RCE/SLES10-SP2-Updates/sles-10-i586/</url>
</repo>
GET /repo/<repoid>/patches

Returns a list of all patches in the specified software repository. The repoid can be obtained using the /repos or /products/<productid>/repos/ call.

RNC:

start = element patches {
  element patch {
    attribute id {xsd:integer},                # SMT ID of the patch
    attribute name {xsd:string},               # patch's Unix name
    attribute version {xsd:integer},           # patch's version number
    attribute category {                       # patch importance category
      "security" |
      "recommended" |
      "optional" |
      "mandatory"}
  }*
}

Example:

<patches>
  <patch name="slesp2-krb5" category="security" id="1471" version="6775"/>
  <patch name="slesp2-heartbeat" category="recommended" id="1524" version="5857"/>
  <patch name="slesp2-curl" category="security" id="1409" version="6402"/>
  ...
</patches>
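
For example, a script could use this call to report only the security patches of a repository, as in the following Python sketch (the base URL is a placeholder for your SMT server):

import xml.etree.ElementTree as ET

import requests

SMT_BASE = "https://smt.example.com/api"  # placeholder; adjust to your setup

response = requests.get(f"{SMT_BASE}/repo/226/patches")
response.raise_for_status()

# Print only the patches of the "security" category
for patch in ET.fromstring(response.content).iter("patch"):
    if patch.get("category") == "security":
        print(patch.get("id"), patch.get("name"), patch.get("version"))
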
GET /repos

Returns a list of all software repositories known to SMT. Repositories that are currently mirrored on SMT have a non-empty time stamp in the mirrored attribute.

RNC:

start = element repos {
  element repo {
    attribute id {xsd:integer},        # SMT ID of the repository
    attribute name {xsd:string},       # repository's Unix name
    attribute target {xsd:string},     # repository's target product
    attribute mirrored {xsd:integer}   # time stamp of the last successful mirror
                                       # (empty if not mirrored yet)
  }*
}

Example:

<repos>
  <repo name="SLE10-SDK-Updates" id="1" mirrored="" target="sles-10-x86_64"/>
  <repo name="SLE10-SDK-SP3-Pool" id="2" mirrored="" target="sles-10-ppc"/>
  <repo name="SLES10-SP2-Updates" id="226" mirrored="1283523440" target="sles-10-i586"/>
  ...
</repos>
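
This makes it easy to find repositories that have not been mirrored yet, as in the following Python sketch (the base URL is a placeholder for your SMT server):

import xml.etree.ElementTree as ET

import requests

SMT_BASE = "https://smt.example.com/api"  # placeholder; adjust to your setup

response = requests.get(f"{SMT_BASE}/repos")
response.raise_for_status()

# An empty "mirrored" attribute means the repository was never mirrored
for repo in ET.fromstring(response.content).iter("repo"):
    if not repo.get("mirrored"):
        print(f"not mirrored yet: {repo.get('name')} ({repo.get('target')})")
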
GET /patch/<patchid>

Returns detailed information about the specified patch. The patchid can be obtained via the /repo/<repoid>/patches call.

RNC:

start = element patch {
  attribute id {xsd:integer},            # SMT ID of the patch
  attribute name {xsd:string},           # patch's Unix name
  attribute version {xsd:integer},       # patch's version number
  attribute category {                   # patch importance category
    "security" |
    "recommended" |
    "optional" |
    "mandatory"},
  element title {xsd:string},            # title of the patch
  element description {text},            # description of issues fixed by the patch
  element issued {
    attribute date {xsd:integer}         # patch release time stamp
  },
  element packages {                     # packages which need update as part
                                         # of this patch
    element package {                    # individual RPM package data
      attribute name {xsd:string},       # package name
      attribute epoch {xsd:integer},     # epoch number
      attribute version {xsd:string},    # version string
      attribute release {xsd:string},    # release string
      attribute arch {xsd:string},       # architecture string
      element origlocation {xsd:anyURI}, # URL of the RPM package in the
                                         # original repository
      element smtlocation {xsd:anyURI}   # URL of the RPM package at the SMT server
    }*
  },
  element references {                   # references to issues fixed by this
                                         # patch
    element reference {                  # individual reference details
      attribute id {xsd:string},         # ID number of the issue (bugzilla
                                         # or CVE number)
      attribute title {xsd:string},      # issue title
      attribute type {"bugzilla","cve"}, # type of the issue
      attribute href {xsd:anyURI}        # URL of the issue in its issue
                                         # tracking system
    }*
  }
}

Example:

<patch name="slesp2-krb5" category="security" id="1471" version="6775">
  <description>
    Specially crafted AES and RC4 packets could allow unauthenticated
    remote attackers to trigger an integer overflow leading to heap
    memory corruption (CVE-2009-4212). This has been fixed.
  </description>
  <issued date="1263343020"/>
  <packages>
    <package name="krb5" arch="i586" epoch="" release="19.43.2" version="1.4.3">
      <origlocation>https://nu.novell.com/repo/$RCE/SLES10-SP2-Updates/sles-10-i586/rpm/i586/krb5-1.4.3-19.43.2.i586.rpm</origlocation>
      <smtlocation>http://kompost.suse.cz/repo/$RCE/SLES10-SP2-Updates/sles-10-i586/rpm/i586/krb5-1.4.3-19.43.2.i586.rpm</smtlocation>
    </package>
    <package name="krb5-apps-servers" arch="i586" epoch="" release="19.43.2" version="1.4.3">
      <origlocation>https://nu.novell.com/repo/$RCE/SLES10-SP2-Updates/sles-10-i586/rpm/i586/krb5-apps-servers-1.4.3-19.43.2.i586.rpm</origlocation>
      <smtlocation>http://kompost.suse.cz/repo/$RCE/SLES10-SP2-Updates/sles-10-i586/rpm/i586/krb5-apps-servers-1.4.3-19.43.2.i586.rpm</smtlocation>
    </package>
    ...
  </packages>
  <references>
    <reference id="535943" href="https://bugzilla.suse.com/show_bug.cgi?id=535943" title="bug number 535943" type="bugzilla"/>
    <reference id="CVE-2009-4212" href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4212" title="CVE-2009-4212" type="cve"/>
  </references>
  <title>Security update for Kerberos 5</title>
</patch>
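
A script can evaluate this data, for example to print the SMT-local package URLs and the CVE references of a patch. The following Python sketch illustrates this; the base URL is a placeholder for your SMT server.

import xml.etree.ElementTree as ET

import requests

SMT_BASE = "https://smt.example.com/api"  # placeholder; adjust to your setup

response = requests.get(f"{SMT_BASE}/patch/1471")
response.raise_for_status()
patch = ET.fromstring(response.content)

print(patch.findtext("title"))
# URLs of the updated packages as mirrored on the SMT server
for package in patch.iter("package"):
    print("  package:", package.findtext("smtlocation"))
# CVE references fixed by this patch
for reference in patch.iter("reference"):
    if reference.get("type") == "cve":
        print("  fixes:", reference.get("id"), reference.get("href"))
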
GET /products

Returns a list of all products known to SMT.

RNC:

start = element products {
  element product {
    attribute id {xsd:integer},      # SMT ID of the product
    attribute name {xsd:string},     # Unix name of the product
    attribute version {xsd:string},  # version string
    attribute rel {xsd:string},      # release string
    attribute arch {xsd:string},     # target machine architecture string
    attribute uiname {xsd:string}    # name of the product to be
                                     # displayed to users
  }*
}

Example:

<products>
  <product name="SUSE_SLED" arch="x86_64" id="1824" rel="" uiname="SUSE Linux Enterprise Desktop 11 SP1" version="11.1"/>
  <product name="SUSE_SLES" arch="i686" id="1825" rel="" uiname="SUSE Linux Enterprise Server 11 SP1" version="11.1"/>
  <product name="sle-hae" arch="i686" id="1880" rel="" uiname="SUSE Linux Enterprise High Availability Extension 11 SP1" version="11.1"/>
  <product name="SUSE-Linux-Enterprise-Thin-Client" arch="" id="940" rel="SP1" uiname="SUSE Linux Enterprise 10 Thin Client SP1" version="10"/>
  ...
</products>
GET /product/<productid>

Returns information about the specified product. The productid can be obtained from data returned by the /products call.

RNC:

start = element product {
  attribute id {xsd:integer},       # SMT ID of the product
  attribute name {xsd:string},      # Unix name of the product
  attribute version {xsd:string},   # version string
  attribute rel {xsd:string},       # release string
  attribute arch {xsd:string},      # target machine architecture string
  attribute uiname {xsd:string}     # name of the product to be displayed
                                    # to users
}

Example:

<product name="SUSE_SLED" arch="x86_64" id="1824" rel="" uiname="SUSE Linux Enterprise Server 11 SP1" version="11.1"/>
GET /product/<productid>/repos

Returns the list of all software repositories for the specified product. The productid can be obtained from the data returned by the /products call.

RNC:

See the /repos call.

Example:

<repos>
  <repo name="SLED11-SP1-Updates" id="143" mirrored="" target="sle-11-x86_64"/>
  <repo name="SLE11-SP1-Debuginfo-Pool" id="400" mirrored="" target="sle-11-x86_64"/>
  <repo name="SLED11-Extras" id="417" mirrored="" target="sle-11-x86_64"/>
  <repo name="SLED11-SP1-Pool" id="215" mirrored="" target="sle-11-x86_64"/>
  <repo name="nVidia-Driver-SLE11-SP1" id="469" mirrored="" target=""/>
  <repo name="ATI-Driver-SLE11-SP1" id="411" mirrored="" target=""/>
  <repo name="SLE11-SP1-Debuginfo-Updates" id="6" mirrored="" target="sle-11-x86_64"/>
</repos>
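
The product calls can be chained with the repository calls: look up a product ID via /products, then fetch its repositories. The following Python sketch illustrates this; the base URL is a placeholder for your SMT server.

import xml.etree.ElementTree as ET

import requests

SMT_BASE = "https://smt.example.com/api"  # placeholder; adjust to your setup

def get_xml(path):
    response = requests.get(f"{SMT_BASE}{path}")
    response.raise_for_status()
    return ET.fromstring(response.content)

# Look up a product by its Unix name, then list its repositories
for product in get_xml("/products").iter("product"):
    if product.get("name") == "SUSE_SLED":
        for repo in get_xml(f"/product/{product.get('id')}/repos").iter("repo"):
            print(repo.get("id"), repo.get("name"))
        break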

B Documentation Updates


This chapter lists content changes for this document.

This manual was updated on the following dates:

B.1 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)

General

B.2 April 2017 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP2)

Bugfixes

B.3 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.

General Updates to this Guide
About This Guide
  • Replaced the introductory text with a more descriptive one, plus added a schema.

Chapter 1, SMT Installation
Chapter 3, Mirroring Repositories on the SMT Server
Chapter 4, Managing Repositories with YaST SMT Server Management
Chapter 5, Managing Client Machines with SMT
Chapter 7, SMT Tools and Configuration Files
Bugfixes

B.4 March 2016 (Maintenance Release of SUSE Linux Enterprise Desktop 12 SP1)

Chapter 1, SMT Installation

Fixed typos: my.conf.rpmnew to my.cnf.rpmnew and my.conf to my.cnf (https://bugzilla.suse.com/show_bug.cgi?id=964121).

B.5 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)

General
  • SMT Guide is now part of the documentation for SUSE Linux Enterprise Desktop.

  • Add-ons provided by SUSE have been renamed as modules and extensions. The manuals have been updated to reflect this change.

  • Numerous small fixes and additions to the documentation, based on technical feedback.

  • The registration service has been changed from Novell Customer Center to SUSE Customer Center.

  • In YaST, you will now reach Network Settings via the System group. Network Devices is gone (https://bugzilla.suse.com/show_bug.cgi?id=867809).

Chapter 1, SMT Installation
Chapter 3, Mirroring Repositories on the SMT Server
Chapter 7, SMT Tools and Configuration Files
Chapter 8, Configuring Clients to Use SMT
Bugfixes
SUSE Linux Enterprise Desktop 12 SP3

Quick Start Manuals

Publication Date: May 07, 2018

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.

SUSE Linux Enterprise Desktop 12 SP3

Installation Quick Start

SUSE Linux Enterprise Desktop 12 SP3

Lists the system requirements and guides you step by step through the installation of SUSE Linux Enterprise Desktop from DVD or from an ISO image.

Use the following procedures to install a new version of SUSE® Linux Enterprise Desktop 12 SP3. This document gives a quick overview of how to run through a default installation of SUSE Linux Enterprise Desktop for the AMD64/Intel 64 architecture.

Publication Date: May 07, 2018

1 Welcome to SUSE Linux Enterprise Desktop

For more detailed installation instructions and deployment strategies, see the SUSE Linux Enterprise Desktop Documentation at http://www.suse.com/documentation/.

1.1 Minimum System Requirements

  • any AMD64/Intel* EM64T processor (32-bit processors are not supported)

  • 512 MB physical RAM (1 GB or more recommended)

  • 3.5 GB available disk space (more recommended)

  • 800 x 600 display resolution (1024 x 768 or higher recommended)

1.2 Installing SUSE Linux Enterprise Desktop

Use these instructions if there is no existing Linux system on your machine, or if you want to replace an existing Linux system.

  1. Insert the SUSE Linux Enterprise Desktop DVD into the drive, then reboot the computer to start the installation program. On machines with a traditional BIOS, a graphical boot screen is displayed. On machines equipped with UEFI, a slightly different boot screen is used. Secure boot on UEFI machines is supported.

    Use F2 to change the language for the installer. A corresponding keyboard layout is chosen automatically. See Section 3.2.1.1, “The Boot Screen on Machines Equipped with Traditional BIOS” or Section 3.2.1.2, “The Boot Screen on Machines Equipped with UEFI” for more information about changing boot options.

  2. Select Installation on the boot screen, then press Enter. This boots the system and loads the SUSE Linux Enterprise Desktop installer.

  3. The Language and Keyboard Layout are initialized with the language settings you have chosen on the boot screen. Change them here, if necessary.

    Read the License Agreement. It is presented in the language you have chosen on the boot screen. License Translations are available. You need to accept the agreement by checking I Agree to the License Terms to install SUSE Linux Enterprise Desktop. Proceed with Next.

  4. A system analysis is performed, where the installer probes for storage devices and tries to find other installed systems. If the network could not be configured automatically while starting the installation system, the Network Settings dialog opens.

    After at least one network interface has been configured you can register your system at the SUSE Customer Center (SCC). Enter the e-mail address associated with your SCC account and the registration code for SUSE Linux Enterprise Desktop. A successful registration is a prerequisite for getting product updates and being entitled to technical support. Proceed with Next.

    Tip
    Tip: Installing Product Patches at Installation Time

    If SUSE Linux Enterprise Desktop has been successfully registered at the SUSE Customer Center, you are asked whether to install the latest available online updates during the installation. If you choose Yes, the system is installed with the most current packages and you do not need to apply the updates after installation. Activating this option is recommended.

    Note
    Note: Release Notes

    From this point on, the Release Notes can be viewed from any screen during the installation process by selecting Release Notes.

  5. After the system is successfully registered, YaST lists modules and extensions that are available for SUSE Linux Enterprise Desktop from the SUSE Customer Center. The list contains free modules, such as the SUSE Linux Enterprise SDK, and extensions that require a paid registration key. Click an entry to see its description. Optionally select a module or extension for installation by activating its check mark. Proceed with Next.

    Extension Selection
  6. The Add-on Product dialog allows you to add additional software sources (so-called repositories) to SUSE Linux Enterprise Desktop that are not provided by the SUSE Customer Center. Such add-on products may include third-party products and drivers or additional software for your system.

    Tip
    Tip: Adding Drivers During the Installation

    You can also add driver update repositories via the Add-On Products dialog. Driver updates for SUSE Linux Enterprise are provided at http://drivers.suse.com/. These drivers have been created via the SUSE SolidDriver Program.

    If you want to skip this step, proceed with Next. Otherwise activate I would like to Install an Add-on Product. Specify a media type, a local path or a network resource hosting the repository and follow the on-screen instructions.

    Check Download Repository Description Files to download the files describing the repository now. If deactivated, they will be downloaded after the installation has started. Proceed with Next and insert a medium if required. Depending on the product's content it may be necessary to accept additional license agreements. Proceed with Next. If you have chosen an add-on product requiring a registration key, you will be asked to enter it at the Extension and Module Registration Codes page.

  7. Review the partition setup proposed by the system. If necessary, change it. You have the following options:

    Edit Proposal Settings

    Lets you change options for the proposed settings, but not the suggested partition layout itself.

    Create Partition Setup

    Select a disk to which to apply the proposal.

    Expert Partitioner

    Opens the Expert Partitioner described in Section 9.1, “Using the YaST Partitioner”.

    To accept the proposed setup without any changes, choose Next to proceed.

  8. Select the clock and time zone to use in your system. To manually adjust the time or to configure an NTP server for time synchronization, choose Other Settings. See Section 3.10, “Clock and Time Zone” for detailed information. Proceed with Next.

  9. To create a local user, type the first and last name in the User’s Full Name field, the login name in the Username field, and the password in the Password field.

    The password should be at least eight characters long and should contain both uppercase and lowercase letters and numbers. The maximum length for passwords is 72 characters, and passwords are case-sensitive.

    For security reasons it is also strongly recommended not to enable the Automatic Login. You should also not Use this Password for the System Administrator but rather provide a separate root password in the next installation step. Proceed with Next.

  10. Type a password for the system administrator account (called the root user).

    You should never forget the root password! After you have entered it here, the password cannot be retrieved. See Section 3.12, “Password for the System Administrator root” for more information. Proceed with Next.

  11. Use the Installation Settings screen to review and, if necessary, change several proposed installation settings. The current configuration is listed for each setting. To change it, click the headline. Some settings, such as firewall or SSH, can be changed directly by clicking the respective links.

    Tip
    Tip: Remote Access

    Changes you can make here can also be made later at any time from the installed system. However, if you need remote access directly after the installation, adjust the Firewall and SSH settings according to your needs.

    Software

    The default scope of software includes the base system and X Window with the GNOME desktop. Clicking Software opens the Software Selection and System Tasks screen, where you can change the software selection by selecting or deselecting patterns. Each pattern contains several software packages needed for specific functions (for example, Web and LAMP server or a print server). For a more detailed selection based on software packages to install, select Details to switch to the YaST Software Manager. See Chapter 10, Installing or Removing Software for more information.

    Booting

    This section shows the boot loader configuration. Changing the defaults is only recommended if really needed. Refer to Chapter 13, The Boot Loader GRUB 2 for details.

    Firewall and SSH

    By default, the Firewall is enabled with the active network interface configured for the external zone. See Section 15.4, “SuSEFirewall2” for configuration details.

    The SSH service is disabled by default, and its port (22) is closed. Therefore logging in from remote is not possible by default. Click enable and open to toggle these settings.

    Default Systemd Target and Services

    By default, the system boots into the graphical target, with network, multi-user and display manager support. Switch to multi-user if you do not need to log in via a display manager.

    System

    View detailed hardware information by clicking System. In the resulting screen you can also change Kernel Settings; see Section 3.13.6, “System Information” for more information.

  12. After you have finalized the system configuration on the Installation Settings screen, click Install. Depending on your software selection you may need to agree to license agreements before the installation confirmation screen pops up. Up to this point no changes have been made to your system. After you click Install a second time, the installation process starts.

  13. During the installation, the progress is shown in detail on the Details tab.

  14. After the installation routine has finished, the computer is rebooted into the installed system. Log in and start YaST to fine-tune the system. If you are not using a graphical desktop or are working from remote, refer to Chapter 5, YaST in Text Mode for information on using YaST from a terminal.

2 Legal Notice


Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.
